Risky AI business: Navigating regulatory and legal dangers to come
Source: Bob Violino
Artificial intelligence (AI) is cropping up everywhere in business, as organizations deploy the technology to gain insights into customers, markets and competitors, and to automate processes in almost every facet of operations.
But AI presents a wide range of hidden dangers for companies, especially in areas such as regulatory compliance, law, privacy and ethics. There is little visibility into how AI and machine learning technologies come to their conclusions in solving problems or addressing a need, leaving practitioners in a variety of industries flying blind into significant business risks.
The concerns are especially relevant for companies in industries such as healthcare and financial services, which have to comply with a number of government and industry regulations.
“Context, ethics, and data quality are issues that affect the value and reliability of AI, particularly in highly regulated industries,” says Dan Farris, co-chairman of the technology practice at law firm Fox Rothschild, and a former software engineer who focuses his legal practice on technology, privacy, data security, and infrastructure matters. “Deploying AI in any highly regulated industry may create regulatory compliance problems.”
Risky AI business
Financial technology companies are investing heavily in AI, but the losses and/or administrative actions that might result are potentially catastrophic for financial services companies, Farris says. “If an algorithm malfunctions, or even functions properly but in the wrong context, for example, there is a risk of significant losses to a trading company or investors,” he says.
Healthcare also provides particularly compelling examples of where things can get troublesome with AI. “Recognition technology that can help identify patterns or even diagnose conditions in medical imaging, for example, is one way that AI is being deployed in the healthcare industry,” Farris says. “While image scanning may be more accurate when done by computers versus the human eye, it also tends to be a discrete task.”
Unlike a physician, who might have the value of other contextual information about a patient, or even intuition developed over years of practice, the results from AI and machine learning programs can be narrow and incomplete. “Reliance on such results without the benefit of medical judgment can actually cause worse patient outcomes,” Farris says.
And like humans, machines will make mistakes, “but they could be different from the kinds of mistakes humans make, such as those arising from fatigue, anger, emotion, or tunnel vision,” says Vasant Dhar, professor of information systems at New York University and an expert on AI and machine learning.
“So, what are the roles and responsibilities of humans and machines in the new world of AI, where machines make decisions and learn autonomously to get better?” Dhar says. “If you view AI as the ‘factory’ where outputs [or] decisions are learned and made based on the inputs, the role of humans is to design the factory so that it produces acceptable levels of costs associated with its errors.”
When machines learn to improve on their own, humans are responsible for ensuring the quality of this learning process, Dhar says. “We should not trust machines with decisions when the costs of error are too high,” he says.
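To make Dhar’s point concrete, one could imagine a deployment gating each automated decision on the estimated cost of being wrong and deferring to a human reviewer above some tolerance. The sketch below is purely illustrative; the function name, threshold, and dollar figures are assumptions, not anything described by Dhar.
```python
# Illustrative sketch: route a machine decision to a human reviewer when the
# potential cost of an error is too high to automate (all values hypothetical).

def route_decision(model_confidence: float, error_cost: float,
                   max_expected_error_cost: float = 1_000.0) -> str:
    """Return 'automate' or 'human_review' for a single decision.

    model_confidence: probability the model assigns to its own prediction (0-1).
    error_cost: estimated loss, in dollars, if this decision turns out wrong.
    max_expected_error_cost: tolerance set by the business or regulator (assumed).
    """
    expected_error_cost = (1.0 - model_confidence) * error_cost
    if expected_error_cost > max_expected_error_cost:
        return "human_review"  # cost of a mistake is too high to trust the machine
    return "automate"

# A 95%-confident call on a $50,000 decision is still reviewed, because the
# expected cost of error (0.05 * 50,000 = $2,500) exceeds the assumed limit.
print(route_decision(0.95, 50_000))  # -> human_review
print(route_decision(0.99, 5_000))   # -> automate (expected cost: $50)
```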
The first question for regulators, Dhar says, is whether state-of-the-art AI systems, regardless of application domain, result in acceptable error costs. For example, transportation regulators might determine that since autonomous vehicles would save 20,000 lives a year, the technology is worthwhile for society. “But for insurance markets to emerge, we might need to consider regulation that would cap damages for errors,” he says.
In the healthcare arena, the regulatory challenges will depend on the application. Certain procedures, such as cataract surgery, are already performed by machines that tend to outperform humans, Dhar says, and recent studies are finding that machines can similarly outperform radiologists and pathologists.
“But machines will still make mistakes, and the costs of these need to be accounted for in making the decision to deploy AI,” Dhar says. “It is largely an expected value calculation, but with a stress on ‘worst case’ as opposed to average case outcomes.”
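Dhar’s “expected value calculation, but with a stress on ‘worst case’” might look roughly like the back-of-the-envelope sketch below; every number in it is invented for illustration.
```python
# Back-of-the-envelope comparison of expected vs. worst-case error costs for a
# hypothetical AI deployment. All figures are made up for illustration.

error_rate = 0.001                    # assumed chance any one decision is wrong
decisions_per_year = 100_000          # assumed decision volume
avg_cost_per_error = 2_000            # assumed average loss per mistake ($)
worst_case_single_error = 5_000_000   # assumed catastrophic single-event loss ($)
worst_case_tolerance = 1_000_000      # assumed limit the business can survive ($)

expected_annual_cost = error_rate * decisions_per_year * avg_cost_per_error
print(f"Expected annual error cost: ${expected_annual_cost:,.0f}")  # $200,000

# An average-case view might accept the deployment; the worst-case view asks
# whether a single catastrophic error is survivable or insurable (e.g., via the
# capped damages Dhar mentions).
if worst_case_single_error > worst_case_tolerance:
    print("Worst case exceeds tolerance: keep a human in the loop or cap liability")
```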
In the future, as machines get better through access to genomic and fine-grained individual data and are capable of making decisions on their own, “we would similarly need to consider what kinds of mistakes they make and their consequences in order to design the appropriate regulation,” Dhar says.
Legal issues to consider
In addition to regulatory considerations, there are legal ramifications for the use of AI.
“The main issue is who will be held responsible if the machine reaches the ‘wrong’ conclusion or recommends a course of action that proves harmful,” says Matt Scherer, an associate with the international labor and employment law firm Littler Mendelson P.C., where he is a member of the robotics, AI, and automation industry group.
For example, in a healthcare-related case, is the responsible party the doctor or healthcare center using the technology, or the designer or programmer of the application? “What if the patient specifically requests that the AI system determine the course of treatment?” Scherer says. “To me, the biggest fear is that humans tend to believe that machines are inherently better at making decisions than humans, and will blindly trust the decision of an AI system that is specifically designed for the purpose.”
Someone at the organization using AI will need to take accountability, says Duc Chu, technology innovation officer at law firm Holland & Hart. “The first issues that come to mind when artificial intelligence or machine learning reach conclusions and make decisions are evidence, authentication, attestation, and responsibility,” he says.
In the financial industry, for instance, if an organization uses AI to help pull together information for financial reports, a human is required to sign and attest that the information presented is accurate and is what it purports to be, and that appropriate controls are in place and operating effectively to ensure the information is reliable, Chu says.
“We then know who the human is who makes that statement and that they are the person authorized to do so,” Chu says. “In the healthcare arena, a provider may use [AI] to analyze a list of symptoms against known diseases and trends to assist in diagnosis and to develop a treatment plan. In both cases, a human makes the final decision, signs off on the final answer, and most importantly, is responsible for the ramifications of a mistake.”
Because AI, and neural networks in particular, is not predictable, “it raises significant challenges to traditional [tort law], because it is difficult to link cause and effect in a traditional sense, since many AI programs do not permit a third party to determine how the conclusion is used,” says Mark Radcliffe, a partner at DLA Piper, a global law firm that specializes in helping clients understand the impact of emerging and disruptive technologies.
“The traditional tort theory requires ‘proximate cause’ for liability,” Radcliffe says. “The tort ‘negligence’ regime applies a reasonable man standard, which is very unclear in the context of software design. Another issue is whether the AI algorithms introduce ‘bias’ into results based on the programming.”
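Radcliffe’s question about algorithms introducing bias is often screened for in practice with simple disparate-impact measures. The sketch below applies the common four-fifths rule of thumb to made-up approval decisions; the groups, data, and use of the threshold are illustrative assumptions, not legal guidance.
```python
# Illustrative sketch: check model approval decisions for disparate impact
# using the "four-fifths" rule of thumb (a common screening heuristic).

from collections import defaultdict

# (group, approved) pairs -- invented outcomes from a hypothetical lending model
decisions = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", False)]

counts = defaultdict(lambda: [0, 0])   # group -> [approved, total]
for group, approved in decisions:
    counts[group][0] += int(approved)
    counts[group][1] += 1

rates = {g: approved / total for g, (approved, total) in counts.items()}
best = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"group {group}: approval rate {rate:.0%}, ratio to highest {ratio:.2f} -> {flag}")
```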
Best practices for safe AI
Organizations can do a number of things to guard against the legal and compliance risks related to AI.
One key requirement is to have a thorough grasp of how machines make their decisions. That means understanding that legislatures, courts, and juries are likely to frown on the creation and deployment of systems whose decisions cannot be properly understood even by their designers, Scherer says.
“I tend to think that the black box issue can be addressed by making sure that systems are extensively tested before deployment, as we do with other technologies — such as certain pharmaceuticals — that we don't fully understand,” Scherer says. “I think in practice, and on a macro-scale, it will be a process of trial and error. We will figure out over time which decisions are better left to humans and which are better made by computers.”
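Read in engineering terms, Scherer’s “extensively tested before deployment” could translate into a simple release gate: measure the error rate on held-out cases and block deployment if it exceeds an agreed limit. The function, data, and threshold below are hypothetical, not drawn from any particular product or standard.
```python
# Hypothetical sketch: block deployment unless the measured error rate on a
# held-out test set stays under an agreed threshold.

def deployment_gate(predictions, ground_truth, max_error_rate=0.01):
    """Return True if the model may be deployed under the assumed policy."""
    errors = sum(p != t for p, t in zip(predictions, ground_truth))
    error_rate = errors / len(ground_truth)
    print(f"measured error rate: {error_rate:.2%} (limit {max_error_rate:.2%})")
    return error_rate <= max_error_rate

# Made-up example: 2 mistakes in 100 held-out cases fails a 1% limit.
preds = [1] * 98 + [0, 0]
truth = [1] * 100
assert deployment_gate(preds, truth) is False
```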
Companies need to consider whether they can design a system to “track” the reasoning at a level which would satisfy regulators and legal thresholds, Radcliffe says. “Regulators may encourage such transparency through rules of liability and other approaches,” he says.
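In practice, “tracking” the reasoning might mean writing an audit record for every automated decision, capturing the inputs, model version, output, and any human override so the decision can be reconstructed later. The record fields and file format below are assumptions, not a regulatory standard.
```python
# Hypothetical sketch: write an append-only audit record for each AI decision
# so the inputs, model version, and outcome can be reconstructed later.

import hashlib
import json
from datetime import datetime, timezone

def log_decision(log_path: str, model_version: str, inputs: dict,
                 output, explanation: str, human_override: bool = False) -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "explanation": explanation,    # e.g., top features or rule that fired
        "human_override": human_override,
    }
    # A content hash helps show the record was not altered after the fact.
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example: record a hypothetical credit decision.
log_decision("decisions.log", "credit-model-1.3",
             {"income": 52000, "debt_ratio": 0.31},
             output="approve", explanation="debt_ratio below 0.35 threshold")
```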
Enterprises should participate in rulemaking with the regulatory agencies that are developing the rules governing their operations, to ensure those rules are realistic. “Government agencies cannot make practical rules without the real-world input from the industry,” Radcliffe says.
Involvement should also extend to industry organizations that are developing industry-specific rules for AI. “Government will be reactive and may make impractical rules based on a single incident or series of incidents,” Radcliffe says. “Companies need to work with industry organizations and government regulatory agencies to avoid those knee-jerk responses.”
Organizations should also know when it is safest to rely on AI conclusions versus human decision making when liabilities are a factor. “This concern will vary by industry and even within industries,” Radcliffe says.
For example, the use of AI by internists to assist in a patient diagnosis is a much lower risk than the use of AI in powering robotic surgeons, Radcliffe says. “For high-risk activities such as medicine, which also has a mature legal regulatory regime, companies will need to work with regulators to update the rules to apply to this new approach.”
In addition, companies need to consider how to allocate liability between themselves and their customers and business partners for the use of AI.
“For example, if a company develops an AI application for a bank, the parties would consider who would be liable if the AI program creates regulatory problems, such as ‘redlining,’ or makes mistakes [such as miscalculating payments], since such issues have no precedent and the parties can design the allocation of liability between them in the contract,” Radcliffe says.
AI, and neural network technology in particular, continues to improve, but companies that want to control regulatory and legal risk should continue to treat AI as one factor among many within a human decision-making process, Farris says.
“AI is a tool, one that humans control and must continue to deploy in thoughtful ways,” Farris says. “Companies that want to successfully deploy AI should first invest in data sets and data quality. Evaluating the quality and fitness of data for AI applications is an important first step. Parsing the tasks and decisions that are ripe for machine learning, and those which continue to require human input, is the next major hurdle.”
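As a minimal sketch of what evaluating “the quality and fitness of data” might involve, the checks below flag duplicate rows and heavily missing columns in a tabular dataset; the column names, data, and missing-value threshold are illustrative assumptions.
```python
# Hypothetical sketch: minimal data-quality checks before training or
# deploying an AI model on a tabular dataset.

import pandas as pd

def basic_quality_report(df: pd.DataFrame, max_missing_frac: float = 0.05) -> dict:
    return {
        "rows": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        "columns_over_missing_limit": [
            col for col in df.columns
            if df[col].isna().mean() > max_missing_frac
        ],
    }

# Illustrative data with one duplicate row and a mostly-missing column.
df = pd.DataFrame({
    "age": [34, 51, 51, None],
    "income": [52000, 64000, 64000, 47000],
    "notes": [None, None, None, "manual entry"],
})
print(basic_quality_report(df))
# -> {'rows': 4, 'duplicate_rows': 1, 'columns_over_missing_limit': ['age', 'notes']}
```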