Humans fight back against robots mining personal finance data
Source: Silla Brush
After spending billions of dollars on cutting-edge artificial intelligence technologies, Europe’s banks and insurers face tougher scrutiny of the tools they use to help root out fraud, check borrowers’ creditworthiness and automate claims decisions. European Union rules starting this week will stress human oversight and consumer protection, which may hamper companies trying to build the tools of the future.
“Companies developing AI technologies will have to consider and embed the data protection issues into the design process,” David Martin, senior legal officer at Brussels-based consumer advocate BEUC, said in an interview. “It’s not something where they can just tick a box at the end.”
The rules could present an obstacle to coders looking to design ever more sophisticated algorithms. That may handicap EU firms competing with rivals in the U.S. and Asia to develop new technologies, according to Nick Wallace, a Brussels-based senior policy analyst at the Center for Data Innovation, a nonpartisan, nonprofit research institute.
“For an algorithmic model to be transparent to a human, even a human with a fairly good understanding of algorithms, it needs to be kept within a certain level of complexity,” Wallace said. “The more abstractions you have, let alone the more data points, the harder it’s going to be for any human being to sit down, read through all of it and scrutinize the decision.”
Regulators worldwide are trying to catch up with the financial industry’s rush to automate everything from trading desks to lending decisions and customer help desks. The banking industry will invest $3.3 billion in AI and related technologies this year, making it the second-biggest spender after retail, research firm International Data Corporation estimates. Overall spending on the technologies will grow to $52.2 billion by 2021 from about $19 billion this year, according to IDC.
The EU’s General Data Protection Regulation, which takes effect May 25, will generally require firms to get consent from people before using their personal data to fully automate certain types of decisions with significant effects, such as whether to award a loan. Clients will have the right to demand that a human employee at the firm intervene and review a decision, and they will have the power to get details about an automated process to help guard against discriminatory practices.
“Major corporations recognize that this is a challenge and that privacy rights and data protection rights need to be given full consideration during the design and development of any kind of product or service,” John Bowman, London-based senior principal at International Business Machines Corp.’s Promontory Financial Group subsidiary, said in an interview.
Company Concerns
As policymakers ironed out the details of the regulations over the past year, financial industry lobbies including the Association for Financial Markets in Europe and U.K. Finance pressed authorities to tread softly and to acknowledge ways the technologies can benefit consumers. In a 24-page letter to policymakers, the European Banking Federation said “profiling activities should not necessarily be perceived as having a negative impact on customers.”
The law is being closely watched by the insurance industry, where four out of five executives say that AI systems will be used alongside human staffers within the next two years, consultant Accenture said in a report this year.
The U.K. arm of Ageas, a Brussels-based insurer, is looking to speed up the handling of thousands of car-insurance claims by using AI software to review images of vehicle damage and help estimate repair costs. GDPR won’t affect the current technology, and the insurer has built the law’s requirements into its processes, an Ageas spokeswoman said.
Allianz SE, Europe’s biggest insurer, uses data and machine-learning technologies in several areas of its insurance business. That includes automating what was once a paper-based and manual underwriting process for small- and medium-sized businesses.
Automated decision-making is typically based either on the customer’s consent or on its being necessary for entering into a contract, according to Philipp Raether, chief privacy officer at Munich-based Allianz Group.
“In scenarios where profiling is necessary for entering into a contract, this will be made transparent to the customer in an understandable way,” Raether said.