Artificial intelligence can go wrong, but how will we know?
Source: Mary Branscombe


Every time we hear that “artificial intelligence” was behind something we thought was uniquely human, from creating images to inventing recipes to writing a description of a photo, someone starts worrying about the dangers of AI either making humans redundant or deciding to do away with us altogether. But the real danger isn’t a true artificial intelligence that’s a threat to humanity, because despite all our advances, it isn’t likely we’ll create that.


What we need to worry about is creating badly designed AI and relying on it without question, so we end up trusting “smart” computer systems we don't understand, and haven't built to be accountable or even to explain themselves.
Self-taught expert systems

Most of the smart systems you read about use machine learning. It’s just one area of artificial intelligence, but it's what you hear about most, because it's where we're making a lot of progress. That’s thanks to an Internet full of information with metadata; services like Mechanical Turk where you can cheaply employ people to add more metadata and check your results; hardware that's really good at dealing with lots of chunks of data at high speed (your graphics card); cloud computing and storage; and a lot of smart people who've noticed there is money to be made taking their research out of the university and into the marketplace.

Machine learning is ideal for finding patterns and using them to recognize, categorize or predict things. It's already powering shopping recommendations, financial fraud analysis, predictive analytics, voice recognition and machine translation, weather forecasting and at least parts of dozens of other services you already use.
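
As a minimal sketch of that pattern-finding loop (the data, labels and model choice below are invented for illustration, not taken from any of the services mentioned), you can train a classifier on labelled historical examples and then ask it to categorise new ones:

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic "past cases" with a yes/no label standing in for, say,
# fraudulent vs legitimate transactions.
X, y = make_classification(n_samples=5000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Learn the patterns from labelled history...
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# ...then use them to categorise cases the system has never seen.
print("held-out accuracy:", model.score(X_test, y_test))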

Outside the lab, machine learning systems don’t teach themselves; there are human designers telling them what to learn. And despite the impressive results from research projects, machine learning is still just one piece of how computer systems are put together. But it's far more of a black box than most algorithms, even to developers, especially when you’re using deep neural networks such as convolutional networks, the systems commonly grouped under “deep learning”.

“Deep learning produces rich, multi-layered representations that their developers may not clearly understand,” says Microsoft Distinguished Scientist Eric Horvitz, who is sponsoring a 100-year study at Stanford of how AI will influence people and society, looking at why we aren't already getting more benefits from AI, as well as concerns that AI may be difficult to control.

The power of deep learning produces “inscrutable” systems that can’t explain why they made decisions, either to the user working with the system or someone auditing the decision later. It’s also hard to know how to improve them. “Backing up from a poor result to ‘what’s causing the problem, where do I put my effort, where do I make my system better, what really failed, how do I do blame assignments,’ is not a trivial problem,” Horvitz explains; one of his many projects at MSR is looking at this.
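
One common starting point for the kind of “blame assignment” Horvitz describes (a generic stand-in of my own, not his MSR project) is to ask which inputs a trained model actually leans on, for example with permutation importance:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=8, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Shuffle each feature in turn and measure how much held-out accuracy drops;
# the features whose shuffling hurts most are where to look first after a poor result.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=1)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: accuracy drop {result.importances_mean[i]:.3f}")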

In some ways, this is nothing new. “Since the start of the industrial revolution, automated systems have been built where there is an embedded, hard-to-understand reason things are being done,” Horvitz says. “There have always been embedded utility functions, embedded design decisions that have tradeoffs.”

With AI, these can be more explicit. “We can have modules that represent utility functions, so there’s a statement that someone has made a tradeoff about how fast a car should go or when it should slow down or when it should warn you with an alert. Here is my design decision: You can review it and question it.” He envisages self-driving cars warning you about those trade-offs, or letting you change them, as long as you accept liability.

Getting easier-to-understand systems, or ones that can explain themselves, is going to be key to reaping the benefits of AI.

It’s naïve to expect machines to automatically make more equitable decisions. The decision-making algorithms are designed by humans, and bias can be built in. When a dating site’s algorithm matches men only with women who are shorter than they are, it perpetuates opinions and expectations about relationships. With machine learning and big data, you can end up automatically repeating the historical bias in the data you’re learning from.
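
A toy illustration of how that repetition happens (entirely synthetic, not drawn from any real hiring or dating data): historical decisions that penalised one group become training labels, and the fitted model reapplies the same penalty to new, equally qualified people.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)      # 0 = group A, 1 = group B
skill = rng.normal(size=n)         # what we would like decisions to depend on

# Historical "hired" labels carried a built-in penalty for group B.
hired = (skill - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Two candidates with identical skill but different group membership:
# the learned model reproduces the historical gap.
print(model.predict_proba([[0.5, 0], [0.5, 1]])[:, 1])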

When a CMU study found that ad-targeting algorithms show ads for high-paying jobs to men more often than to women, it might have been economics rather than assumptions; if more ad buyers target women, car companies or beauty products could out-bid recruiters. But unless the system can explain why, it looks like discrimination.

The ACLU has already raised questions about whether online ad tracking breaks the rules of the Equal Credit Opportunity Act and the Fair Housing Act. And Horvitz points out that machine learning could sidestep the privacy protections for medical information in the Americans with Disabilities Act and the Genetic Information Nondiscrimination Act that prevent it from being used in decisions about employment, credit or housing, because it can make “category-jumping inferences about medical conditions from nonmedical data.”

It’s even more of an issue in Europe, he says. “One thread of EU law is that when it comes to automated decisions and automation regarding people, people need to be able to understand decisions and algorithms need to explain themselves. Algorithms need to be transparent.” There are currently exemptions for purely automatic processing, but the forthcoming EU data privacy regulation might require businesses to disclose the logic used for that processing.

The finance industry has already had to start dealing with these issues, says Alex Gray, CTO of machine learning service SkyTree, because it’s been using machine learning for years, especially for credit cards and insurance.

“They've got to the point where it affects human lives, for example by denying someone credit. There are regulations that force credit card companies to explain to the credit applicant why they were denied. So, by law, machine learning has to be explainable to the everyman. The regulation only exists for the financial industry but our prediction is you will see that everywhere, as machine learning inevitably and quickly makes its way into every critical problem of human society.”

Explanations are obviously critical in medicine. IBM Watson CTO Rob High points out “It’s very important we be transparent about the rationale of our reasoning. When we provide answers to a question, we provide supporting evidence for a treatment suggestion and it’s very important for the human who receives those answers to be able to challenge the system to reveal why it believed in the treatment choices it suggested.”

But he believes it’s important to show the original data the system learned from, rather than the specific model it used to make the decision. “The average human being is not well-equipped to understand the nuance of why different algorithms are more or less relevant,” he says, “but they can test them quickly by what they produce. We have to explain in a form the person who is an expert in that field will recognise, not show that it’s justified by the mathematics in the system.”
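
One simple way to surface that kind of evidence (a generic sketch, not how Watson actually works) is to return the most similar past cases and their known outcomes alongside each suggestion, so a domain expert can judge the reasoning in their own terms:

from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.neighbors import NearestNeighbors

# Synthetic historical cases with known outcomes stand in for past records.
X, y = make_classification(n_samples=3000, n_features=6, random_state=6)
X_hist, y_hist, new_case = X[:-1], y[:-1], X[-1:]

model = GradientBoostingClassifier().fit(X_hist, y_hist)
index = NearestNeighbors(n_neighbors=3).fit(X_hist)

print("suggested outcome:", model.predict(new_case)[0])

# Supporting evidence: the closest cases in the original data and what happened to them.
_, neighbours = index.kneighbors(new_case)
for i in neighbours[0]:
    print(f"similar past case {i}: outcome {y_hist[i]}")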

Medical experts often won’t accept systems that don’t make sense to them. Horvitz found this with a system that advised pathologists on what tests to run. The system could be more efficient if it wasn’t constrained to the hierarchies we use to categorise disease, but the users disliked it until it was changed to work in a more explicable way. “It wouldn’t be as powerful, it would ask more questions and do more tests, but the doctor would say ‘I get it, I can understand this and it can really explain what it’s doing.’”

Self-driving cars will also bring more regulation to AI, says Gray. “Today, a bunch of that [self-driving system] is neural networks and it’s not explainable. Eventually, when a car hits somebody and there's an investigation, that issue will come up. The same will be true of everywhere that’s high value, which affects people or their businesses; there's going to have to be that kind of explainability.”

In future, Gray says machine learning systems may need to show how the data was prepared and why a particular machine learning model was chosen. “You’ll have to explain the performance of the model and its predictive accuracy in specific situations.”

That might mean compromises between how transparent a model is and how powerful it is. “It's not always the case that the more powerful methods are less transparent but we do see those trade-offs,” says Horvitz. “If you push very hard to get transparency, you will typically weaken the system.”
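
That trade-off is easy to see in miniature (on synthetic data of my own choosing, so the exact numbers mean nothing): a linear model whose per-feature weights can be read off directly, next to a more powerful ensemble that offers no comparably simple account of a single decision.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=12, n_informative=6, random_state=2)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=2)

transparent = LogisticRegression(max_iter=1000).fit(X_train, y_train)
opaque = RandomForestClassifier(n_estimators=200, random_state=2).fit(X_train, y_train)

print("transparent model accuracy:", transparent.score(X_test, y_test))
print("its readable per-feature weights:", np.round(transparent.coef_[0], 2))
print("opaque model accuracy:", opaque.score(X_test, y_test))
# Whatever the scores turn out to be, the forest has no one-line summary
# of why it classified any individual case the way it did.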

As well as the option of making systems more explainable, it’s also possible to use one machine learning system to explain another. That’s the basis of a system Horvitz worked on called Ask MSR. “When it generated an answer it could say here's the probability it's correct,” he says, and it’s a trick Watson uses too. “At a metalevel, you’re doing machine learning about a complex process you can’t see directly to characterize how well it's going to do.”
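
A minimal version of that meta-level idea (my own sketch, not how Ask MSR or Watson are actually built) trains a second model to predict whether the first model’s answer is correct, using the case’s features and the first model’s own score:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=6000, n_features=10, random_state=3)
X_base, X_meta, y_base, y_meta = train_test_split(X, y, test_size=0.5, random_state=3)

# The "complex process": a base model we treat as a black box.
base = GradientBoostingClassifier().fit(X_base, y_base)

# Label each held-out case 1 if the base model got it right, 0 otherwise,
# then learn to predict that from the case plus the base model's score.
correct = (base.predict(X_meta) == y_meta).astype(int)
meta_features = np.column_stack([X_meta, base.predict_proba(X_meta)[:, 1]])
meta = LogisticRegression(max_iter=1000).fit(meta_features, correct)

# "Here's the probability this answer is correct" for one case.
print("estimated probability the answer is right:", meta.predict_proba(meta_features[:1])[0, 1])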

Ryan Caplan, CEO of ColdLight, which builds AI-based predictive analytics, suggests systems may ask how much they will need to explain before they give you an answer. “Put the human being in control by asking ‘do you need to legally explain the model or do you need the best result?’ Sometimes it’s more important to have accuracy over explainability. If I’m setting the temperature in different areas of an airport, maybe I don’t need to explain how I decide. But in many industries, like finance, where a human has to be able to explain a decision, that system may have to be curtailed to certain algorithms.”
Accessibility not fragility

Hector Yee, who worked on AI projects at Google before moving to AirBnB, insists that “machine learning should involve humans in the loop somewhere.” When he started work on AirBnB’s predictive systems he asked colleagues if they wanted a simple model they could understand or a stronger model they wouldn’t. “We made the trade-off early on to go human interpretable models,” he says, because it makes dealing with bugs and outliers in the data far easier.

“Even the most perfect neural net doesn’t know what it doesn’t know. We have a feedback loop between humans and machine learning; we can look at what the machine has done and what we need to do to add features that improve the model. We know what data we have available. We can make an informed decision what to do next. When you do that, suddenly your weaker model becomes stronger.”
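
A rough sketch of that loop (the pricing data below is invented, not anything AirBnB actually uses): keep the model simple enough to read, then inspect its worst errors to decide which records to clean or what feature to add next.

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
nights = rng.integers(1, 15, 500).astype(float)
price = 40.0 * nights + rng.normal(scale=20, size=500)
price[:5] = 9999.0                         # a few corrupted records

model = LinearRegression().fit(nights.reshape(-1, 1), price)
print("learned price per night:", round(model.coef_[0], 2))   # readable, so reviewable

# Look at the biggest residuals: the corrupted rows jump out, and a human
# can decide whether to clean them or add a feature that explains them.
errors = np.abs(model.predict(nights.reshape(-1, 1)) - price)
print("suspect rows:", np.argsort(errors)[-5:])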

Patrice Simard of Microsoft Research is convinced that applies beyond today’s PhD-level machine learning experts. His goal is “to democratise machine learning and make it so easy to use my mother could build a classifier with no prior knowledge of machine learning.”

Given the limited number of machine learning experts, he says the best way to improve machine learning systems is to make them easier to develop. “You can build a super smart system that understands everything or you can break it down into a lot of multiple tasks and if each of these tasks can be done in an hour by a person of normal expertise, we can talk about scaling the numbers of contributors instead of making one particular algorithm smarter.”

When he was running Bing Ad Center, he abandoned a complex but powerful algorithm for something far simpler. “It took a week to train 500 million parameters using 20 machines and every time something went wrong people pointed to the algorithm and we had to prove it was computing the right thing, and then a week later, the same thing would happen again. I replaced it with a very simple algorithm that was similar in performance but could train in a matter of minutes or hours.” It was easier to understand, easier to develop and there were no more time-wasting arguments about whether the algorithm was wrong.

Being able to retrain quickly is key to keeping machine learning systems current, because the data feeding into machine learning systems is going to change over time, which will affect the accuracy of the predictions they make. With too complex a system, Simard warns, “You’ll be stuck with an algorithm you don’t understand. You won’t know if you can keep the system if no one has the expertise to tell you whether it still works. Or you might have one system that depends on another and one of those systems gets retrained. Can you still rely on it?”
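
A minimal sketch of that concern (the drift, the 0.9 threshold and the retraining policy are all invented for illustration): track accuracy on fresh data as it arrives and refit when performance degrades.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)

def make_batch(shift, n=2000):
    # Synthetic data whose decision boundary moves as `shift` grows.
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + shift * X[:, 1] > 0).astype(int)
    return X, y

X, y = make_batch(shift=0.0)
model = LogisticRegression(max_iter=1000).fit(X, y)

for month, shift in enumerate([0.0, 0.5, 1.0, 1.5]):
    X_new, y_new = make_batch(shift)
    accuracy = model.score(X_new, y_new)
    print(f"month {month}: accuracy {accuracy:.2f}")
    if accuracy < 0.9:                     # drift detected: retrain on the fresh batch
        model = LogisticRegression(max_iter=1000).fit(X_new, y_new)
        print(f"month {month}: retrained")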

And if AI is really effective, it’s going to change our world enough that it will have to evolve to keep up, Horvitz points out. A system that identifies patients at risk of hospital readmission and keeps them out of the emergency room will change the mix of patients it has to assess.

On the one hand, AI systems need to know their limitations. “When you take a system and put it out in the real open world, there are typically many unforeseen circumstances that come up. How do you design systems that explicitly understand they're in an open world and explicitly know that the world is bigger than their information?”

But, on the other hand, they also need to know their own impact. “The AI systems themselves as we build them have to understand the influences they make in the world over time, and somehow track them. They have to perform well, even though they’re changing the world they're acting in.”

