Researchers Combat Gender and Racial Bias in Artificial Intelligence
Source: Dina Bass


When Timnit Gebru was a student at Stanford University’s prestigious Artificial Intelligence Lab, she ran a project that used Google Street View images of cars to determine the demographic makeup of towns and cities across the U.S. While the AI algorithms did a credible job of predicting income levels and political leanings in a given area, Gebru says her work was susceptible to bias—racial, gender, socio-economic. She was also horrified by a ProPublica report that found a computer program widely used to predict whether a criminal will re-offend discriminated against people of color.

So earlier this year, Gebru, 34, joined a Microsoft Corp. team called FATE—for Fairness, Accountability, Transparency and Ethics in AI. The program was set up three years ago to ferret out biases that creep into AI data and can skew results.

“I started to realize that I have to start thinking about things like bias,” says Gebru, who co-founded Black in AI, a group set up to encourage people of color to join the artificial intelligence field. “Even my own PhD work suffers from whatever issues you'd have with dataset bias.”
[Photo: Timnit Gebru. Source: Microsoft]

In the popular imagination, the threat from AI tends toward the alarmist: self-aware computers turning on their creators and taking over the planet. The reality (at least for now) turns out to be a lot more insidious but no less concerning to the people working in AI labs around the world. Companies, government agencies and hospitals are increasingly turning to machine learning, image recognition and other AI tools to help predict everything from the creditworthiness of a loan applicant to the preferred treatment for a person suffering from cancer. The tools have big blind spots that particularly affect women and minorities.

“The worry is if we don't get this right, we could be making wrong decisions that have critical consequences to someone's life, health or financial stability,” says Jeannette Wing, director of Columbia University's Data Sciences Institute.

Researchers at Microsoft, International Business Machines Corp. and the University of Toronto identified the need for fairness in AI systems back in 2011. Now in the wake of several high-profile incidents—including an AI beauty contest that chose predominantly white faces as winners—some of the best minds in the business are working on the bias problem. The issue will be a key topic at the Conference on Neural Information Processing Systems, an annual confab that starts today in Long Beach, California, and brings together AI scientists from around the world.

AI is only as good as the data it learns from. Let’s say programmers are building a computer model to identify dog breeds from images. First they train the algorithms with photos that are each tagged with breed names. Then they put the program through its paces with untagged photos of Fido and Rover and let the algorithms name the breed based on what they learned from the training data. The programmers see what worked and what didn't and fine-tune from there.
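
To make that workflow concrete, here is a minimal sketch of the train-then-evaluate loop the article describes, using scikit-learn with synthetic feature vectors and invented breed names in place of real photos; it illustrates the idea rather than any production system.

```python
# Minimal sketch of the train / evaluate / fine-tune loop described above.
# Real systems train deep networks on raw images; here synthetic feature
# vectors and invented breed names stand in for photos so the example runs
# as-is.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
breeds = ["labrador", "poodle", "beagle"]        # hypothetical breed tags
X = rng.normal(size=(600, 16))                   # stand-in image features
y = rng.integers(0, len(breeds), size=600)       # stand-in breed labels
X[:, 0] += y                                     # make features weakly informative

# "Training data": tagged photos. "Test data": untagged photos of Fido and Rover.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# See what worked and what didn't, then fine-tune from there.
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```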

The algorithms continue to learn and improve, and with more time and data are supposed to become more accurate. Unless bias intrudes.

Bias can surface in various ways. Sometimes the training data is insufficiently diverse, prompting the software to guess based on what it “knows.” In 2015, Google's photo software infamously tagged two black users “gorillas” because the data lacked enough examples of people of color. Even when the data accurately mirrors reality, the algorithms can still get the answer wrong, incorrectly guessing, say, that a particular nurse in a photo or text is female because the data shows fewer men are nurses. In some cases the algorithms are trained to learn from the people using the software and, over time, pick up the biases of the human users.

AI also has a disconcertingly human habit of amplifying stereotypes. PhD students at the University of Virginia and the University of Washington examined a public dataset of photos and found that the images of people cooking were 33 percent more likely to picture women than men. When they ran the images through an AI model, the algorithms said women were 68 percent more likely to appear in the cooking photos.
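
In rough terms, the amplification they measured comes from comparing how skewed the labels are in the data with how skewed the model's predictions are. A toy calculation with invented counts (not the study's actual numbers) shows the idea:

```python
# Rough illustration of bias amplification: compare the gender skew in the
# dataset's labels with the skew in the model's predictions. The counts are
# invented for illustration, not taken from the study.
def skew(women, men):
    """How many times more likely women are than men to appear."""
    return women / men

# Hypothetical counts of people pictured cooking.
data_women, data_men = 665, 500     # labels in the dataset (~33% more women)
pred_women, pred_men = 840, 500     # what the trained model predicts (~68% more)

print(f"skew in data:        {skew(data_women, data_men):.2f}x")
print(f"skew in predictions: {skew(pred_women, pred_men):.2f}x")
# If the prediction skew exceeds the data skew, the model has amplified the
# stereotype rather than merely reflected it.
```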

Eliminating bias isn’t easy. Improving the training data is one way. Scientists at Boston University and Microsoft's New England lab zeroed in on so-called word embeddings—sets of data that serve as a kind of computer dictionary used by all manner of AI programs. In this case, the researchers were looking for gender bias that could lead algorithms to do things like conclude people named John would make better computer programmers than ones named Mary.

In a paper called "Man is to Computer Programmer as Woman is to Homemaker?" the researchers explain how they combed through the data, keeping legitimate correlations (man is to king as woman is to queen, for one) and altering ones that were biased (man is to doctor as woman is to nurse). In doing so, they created a gender-bias-free public dataset and are now working on one that removes racial biases.
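
The core trick behind that kind of debiasing is to estimate a “gender direction” from pairs such as he/she and then strip that component out of words that should be gender-neutral. The sketch below uses tiny made-up vectors rather than real embeddings, so it only illustrates the geometry, not the published method in full:

```python
# Sketch of the debiasing geometry: estimate a "gender direction" from word
# pairs like (he, she), then remove that component from words that should be
# gender-neutral. Tiny made-up 3-d vectors stand in for real embeddings.
import numpy as np

emb = {                                        # hypothetical embeddings
    "he":         np.array([1.0, 0.2, 0.1]),
    "she":        np.array([-1.0, 0.2, 0.1]),
    "programmer": np.array([0.6, 0.9, 0.3]),   # leans toward "he" in this toy data
}

gender_dir = emb["he"] - emb["she"]
gender_dir /= np.linalg.norm(gender_dir)

def neutralize(vec, direction):
    """Remove the component of vec that lies along the bias direction."""
    return vec - vec.dot(direction) * direction

emb["programmer"] = neutralize(emb["programmer"], gender_dir)
# After neutralizing, "programmer" no longer leans toward either gender.
print(emb["programmer"].dot(gender_dir))       # ~0.0
```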

“We have to teach our algorithms which are good associations and which are bad the same way we teach our kids,” says Adam Kalai, a Microsoft researcher who co-authored the paper.

He and researchers including Cynthia Dwork—the academic behind the 2011 push towards AI fairness—have also proposed using different algorithms to classify two groups represented in a set of data, rather than trying to measure everyone with the same yardstick. So for example, female engineering applicants can be evaluated by the criteria best suited to predicting a successful female engineer and not be excluded because they don't meet criteria that determine success for the larger group. Think of it as algorithmic affirmative action that gets the hiring manager qualified applicants without prejudice or sacrificing too much accuracy.
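
One way to picture that proposal is to fit a separate model, or at least a separate decision rule, for each group, so every applicant is scored by the criteria that best predict success for people like them. The following is a toy sketch with synthetic data, not the researchers' published algorithm:

```python
# Toy sketch of group-specific classifiers: fit one model per group so each
# group is judged by the criteria that best predict success for that group.
# The data, groups and features here are synthetic illustrations, not the
# researchers' published method.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_group(n, signal_column):
    """Success in each group happens to be predicted by a different feature."""
    X = rng.normal(size=(n, 3))
    y = (X[:, signal_column] > 0).astype(int)
    return X, y

groups = {"A": make_group(300, signal_column=0),
          "B": make_group(300, signal_column=1)}

# One model per group, instead of one yardstick for everyone.
models = {g: LogisticRegression().fit(X, y) for g, (X, y) in groups.items()}

def score(applicant_features, group):
    """Probability of success, judged by the applicant's own group's criteria."""
    return models[group].predict_proba(applicant_features.reshape(1, -1))[0, 1]

print(score(rng.normal(size=3), "B"))
```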

While many researchers work on known problems, Microsoft's Ece Kamar and Stanford University's Himabindu Lakkaraju are trying to find black holes in the data. These “unknown unknowns”—a conundrum made famous by former Secretary of Defense Donald Rumsfeld—are the gaps in a dataset that the engineer or researcher doesn't even realize are missing.

In one test using a dataset with photos of black dogs and white and brown cats, the software incorrectly labeled a white dog as a cat. Not only was the AI wrong, it was very sure it was right, making the error harder to detect. The researchers look for places where the software had high confidence in its decision yet was mistaken, noting the features that characterize the error. That information is then provided to the system designer with examples they can use to retrain the algorithm.
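
Hunting for these confidently wrong answers can be approximated by filtering a model's outputs on labeled test examples for cases where it was both mistaken and highly confident. The sketch below uses made-up prediction scores for five images:

```python
# Hypothetical sketch of hunting for "unknown unknowns": test examples where
# the model was wrong yet highly confident, like a white dog labeled a cat.
import numpy as np

def confident_mistakes(probs, true_labels, threshold=0.9):
    """Indices of examples the model got wrong with high confidence."""
    predictions = probs.argmax(axis=1)
    confidence = probs.max(axis=1)
    wrong = predictions != true_labels
    return np.where(wrong & (confidence >= threshold))[0]

# Stand-in class probabilities for five test images (0 = cat, 1 = dog).
probs = np.array([[0.95, 0.05],   # confidently "cat" ...
                  [0.40, 0.60],
                  [0.10, 0.90],
                  [0.85, 0.15],
                  [0.55, 0.45]])
true_labels = np.array([1, 1, 1, 0, 0])   # ... but the first image is a dog

# These examples, plus the features that characterize them, go back to the
# system designer for retraining.
print(confident_mistakes(probs, true_labels))   # -> [0]
```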

Researchers say it will probably take years to solve the bias problem. While they see promise in various approaches, they consider the challenge not simply technological but legal too, because some of the solutions require treating protected classes differently, which isn't legal everywhere.

What’s more, many AI systems are black boxes; the data goes in and the answer comes out without an explanation for the decision. Not only does that make it difficult to figure out how bias creeps in; the opacity also makes it hard for the person denied parole or the teacher labeled a low performer to appeal, because they have no way of knowing why that decision was reached. Google researchers are studying how adding some manual restrictions to machine learning systems can make their outputs more understandable without sacrificing output quality, an initiative nicknamed GlassBox. The Defense Advanced Research Projects Agency, or DARPA, is also funding a big effort on explainable AI.
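
The article doesn't spell out what restrictions GlassBox imposes, but one common example of such a manual restriction is a monotonicity constraint, which forces a prediction to move in only one direction as a given input grows and so makes the model's behavior easier to explain. The sketch below illustrates the general idea with scikit-learn's built-in support for such constraints and invented loan data; it is not a description of Google's system:

```python
# Generic illustration of a "manual restriction": a monotonicity constraint
# forces the predicted risk to move in only one direction as an input grows,
# which makes a model's behavior easier to explain and to appeal.
# Loan data is invented; this is not a description of Google's GlassBox.
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier

rng = np.random.default_rng(2)
income = rng.uniform(20_000, 120_000, size=1_000)
debt = rng.uniform(0, 50_000, size=1_000)
X = np.column_stack([income, debt])
# Synthetic "default" label: more income helps, more debt hurts, plus noise.
y = (debt / 1_000 - income / 2_500 + rng.normal(0, 5, 1_000) > 0).astype(int)

# monotonic_cst: -1 = predicted default risk may only fall as income rises,
#                +1 = predicted default risk may only rise as debt rises.
model = HistGradientBoostingClassifier(monotonic_cst=[-1, 1]).fit(X, y)
print(model.predict_proba([[100_000, 5_000], [30_000, 40_000]])[:, 1])
```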

The good news is that some of the smartest people in the world have turned their brainpower on the problem. “The field really has woken up and you are seeing some of the best computer scientists, often in concert with social scientists, writing great papers on it,” says University of Washington computer science professor Dan Weld. “There’s been a real call to arms.”

