Artificial Intelligence May Reflect the Unfair World We Live in
Source: Sophia Arakelyan


We've all heard Elon Musk speak with foreboding about the danger posed by artificial intelligence (AI) -- a technology he has warned could potentially bring forth a Third World War.

But let's put aside for a moment Musk's claims about the threat of human extinction and look instead at the present-day risk AI poses.

This risk, which may well already be commonplace in the technology business, is bias within the learning process of artificial neural networks.

This notion of bias may not be as alarming as that of "killer" artificial intelligence -- something Hollywood has conditioned us to fear. But, in fact, a growing body of evidence suggests that AI systems have developed biases against racial minorities and women.

The proof? Consider the racial discrimination faced by people of color who apply for loans. One reason may be that financial institutions are applying machine-learning algorithms to the data they collect about an applicant, looking for patterns that determine whether a borrower is a good or bad credit risk.
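
To make that mechanism concrete, here is a minimal sketch -- using synthetic data and hypothetical feature names, not any lender's actual model -- of how such a classifier can absorb historical bias: a feature like zip code acts as a proxy for race, and a model trained on discriminatory past decisions learns to penalize it.

```python
# Minimal sketch (synthetic data, hypothetical features) of proxy bias:
# historical loan decisions penalized a zip-code group independent of
# income, and a model trained on those labels learns to do the same.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

income = rng.normal(50, 15, n)       # applicant income (hypothetical)
zip_group = rng.integers(0, 2, n)    # 1 = historically redlined area
# Biased historical labels: group 1 was denied 40% of the time even
# when income qualified, so the "ground truth" encodes discrimination.
approved = ((income > 45) & ~((zip_group == 1) & (rng.random(n) < 0.4))).astype(int)

X = np.column_stack([income, zip_group])
model = LogisticRegression().fit(X, approved)

print("coef on income:   ", model.coef_[0][0])  # positive, as expected
print("coef on zip_group:", model.coef_[0][1])  # negative: learned proxy bias
```

Nothing in the training code mentions race, yet the model reproduces the discrimination baked into its labels -- which is exactly the pattern critics describe.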

Think, too, about those AI-powered advertisements that portray the best jobs being performed by men, not women. Research by Carnegie Mellon University showed that in certain settings, Google online ads promising applicants help getting jobs paying more than $200,000 were shown to significantly fewer women than men.

That raised questions about the fairness of targeting ads online.

And how about Amazon's refusal to provide same-day delivery service to certain zip codes whose populations were predominantly black?

Cases like these suggest that human biases around race and gender have been transferred into machine intelligence by the professionals who build it. The result is that AI systems are being trained to reflect the general opinions, prejudices and assumptions of their creators, particularly in the fields of lending and finance.

Because of these biases, experts are already striving to build a greater degree of fairness into AI. "Fairness" in this context means an effort to find, in the data, representations of the real world that don't carry its prejudices forward. These, in turn, can help produce model predictions that reflect a more diverse global belief system and don't discriminate with regard to race, gender or ethnicity.
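
As one concrete example of what such a fairness effort might measure, here is a minimal sketch of a common check known as demographic parity (the arrays are hypothetical, not any organization's actual metric): it simply compares the rate of positive predictions across groups.

```python
# A minimal sketch of the demographic-parity check: does the model
# approve members of each group at roughly the same rate?
# (Toy arrays; "group" might encode gender, race or ethnicity.)
import numpy as np

def demographic_parity_gap(predictions: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates between two groups."""
    rate_a = predictions[group == 0].mean()
    rate_b = predictions[group == 1].mean()
    return abs(rate_a - rate_b)

preds = np.array([1, 0, 1, 1, 0, 1, 0, 0])  # model decisions (1 = approve)
grp   = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # protected-group membership
print(demographic_parity_gap(preds, grp))   # 0.75 vs 0.25 -> gap of 0.5
```

A gap near zero doesn't prove a model is fair, but a large gap is a clear warning sign worth investigating.
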
Research

There is research backing up the threat: In 2014, the Journal of Personality and Social Psychology published the results of an experiment conducted by Justin Friesen, Troy Campbell and Aaron Kay. The experiment demonstrated that people have the tendency to strongly adhere to their beliefs. Even in the face of contradictory logic and scientific evidence, test subjects in the experiment steadfastly refused to change their opinions, and instead claimed moral superiority.

During the experiment, test subjects made several unsubstantiated statements that had no grounding in common sense, yet somehow allowed them to maintain their version of the truth. More than anything, apparently, they -- and people in general -- have the desire to always be correct.

The issue here might be caused by the data scientists responsible for training the artificial "neural networks" involved. Neural networks are interconnected groups of nodes, loosely modeled on the vast network of neurons in a human or animal brain. The data scientists are "training" them for organizations possessing tremendous social impact and presence.
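
For readers unfamiliar with the term, here is a toy illustration of what "training" such a network means -- a pure-numpy sketch of a tiny two-layer network learning the XOR function. Real systems are vastly larger, but the principle of adjusting connection weights to fit example data is the same.

```python
# A toy neural network in pure numpy: two layers of interconnected nodes
# whose connection weights are adjusted ("trained") by gradient descent
# until the network reproduces the XOR function from four examples.
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # hidden layer: 8 nodes
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # output layer: 1 node
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    h = np.tanh(X @ W1 + b1)            # forward pass through hidden nodes
    out = sigmoid(h @ W2 + b2)          # network's current predictions
    grad_out = (out - y) / len(X)       # gradient of cross-entropy loss
    grad_h = (grad_out @ W2.T) * (1 - h ** 2)
    W2 -= h.T @ grad_out; b2 -= grad_out.sum(0)  # nudge weights downhill
    W1 -= X.T @ grad_h;   b1 -= grad_h.sum(0)

print(out.round().ravel())  # approaches [0. 1. 1. 0.]
```

The network learns whatever pattern minimizes its error on the examples it is given -- which is precisely why biased examples produce biased networks.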

So, the implication is that these scientists may unconsciously be transferring their personal core-belief structures and opinions regarding minorities, gender and ethnicity into the AI they develop.

In addition, the technological leaps being made aren't always clear to us to begin with: Neural networks, in fact, remain a black box. It's hard to comprehend why they make the predictions they do -- and this remains a major area of ongoing research.
What could be done

One of the methods being used to gain more insight into the problem is attention-based neural-network architectures, which help shed some light on what the network actually "sees" and focuses on when it makes predictions. Work like this offers a practical, research-based way to probe the black box.
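
Here is a toy numpy sketch of the general idea behind such architectures (illustrative numbers and hypothetical feature names, not any particular model): attention assigns a weight to each input element, and inspecting those weights reveals what the model leaned on for a given prediction.

```python
# A minimal sketch of the attention idea: compute a softmax weight for
# every input element; the weights show what the model "focused on".
# (Toy numbers and hypothetical feature names, not a real model.)
import numpy as np

def attention_weights(query: np.ndarray, keys: np.ndarray) -> np.ndarray:
    """Softmax of scaled dot-product scores: one weight per input element."""
    scores = keys @ query / np.sqrt(query.size)
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()

features = ["income", "zip_code", "age", "savings"]   # hypothetical inputs
keys = np.array([[1.0, 0.2], [0.1, 1.5], [0.3, 0.1], [0.9, 0.4]])
query = np.array([0.2, 1.0])   # what the model is "asking about"

for name, w in zip(features, attention_weights(query, keys)):
    print(f"{name:>8}: {w:.2f}")  # a large weight on zip_code is a red flag
```

An auditor who sees most of the weight landing on a proxy feature like zip code has a concrete lead on where bias is entering the prediction.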

Another potential solution might be to legally require across-the-board transparency from financially and socially significant organizations that regularly employ machine learning in their decision-making procedures.

Under such a scenario, organizations would be forced to reveal how they handle data and generate results, in order to deter others from following in their footsteps and further harming society.

Still, even if full transparency were to be applied and data scientists sincerely tried to feed neural networks with correct data, injustice would still exist in the world at large. In business, for example, women hold fewer C-level positions in companies than men, and African Americans earn less on average than whites. This is the reality in our culture, and unfair neural nets are simply its byproduct.

So, our dilemma is whether to "adjust" neural networks to make them more fair in an unfair world, or to address the root causes of bias and prejudice in real life and give more opportunities to women and minorities. In the latter case, we would (hopefully) see the data naturally improve, reflecting those positive trends over time.
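
For illustration, here is a minimal sketch of one standard way models get "adjusted" in practice (a common reweighting scheme, offered as an example rather than a prescription): training examples are weighted so that each group contributes equally to the loss, rather than letting the majority group dominate.

```python
# A minimal sketch of fairness-motivated reweighting: weight each
# training example inversely to its group's frequency so minority
# examples are not drowned out by the majority group.
import numpy as np

def balanced_sample_weights(group: np.ndarray) -> np.ndarray:
    """Weight each example by len(group) / (n_groups * group_count)."""
    weights = np.empty(len(group), dtype=float)
    n_groups = len(np.unique(group))
    for g in np.unique(group):
        mask = group == g
        weights[mask] = len(group) / (n_groups * mask.sum())
    return weights

grp = np.array([0, 0, 0, 0, 0, 0, 1, 1])  # majority vs. minority group
print(balanced_sample_weights(grp))       # minority examples weigh more
# These weights could be passed to, e.g., sklearn's fit(..., sample_weight=w).
```

This is the "adjust the network" horn of the dilemma: it rebalances the model's view of the data without changing the unfair world that produced it.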

A truly progressive society should opt for the second option. Meanwhile, entrepreneurs already in, or contemplating getting into, the AI field might want to develop AI products and deal with the data collected from millions of users in a way that's cognizant of the potential biases it might contain. In other words, they need to ask if bias exists in their businesses and be more conscious of such scenarios.

The issue of bias carries reminders of the era when home computer use became the norm and hacking attacks were common. Back then, ethical hackers appeared and exposed vulnerabilities in systems -- and they're still on the case.

Similarly, in this age of AI development, ethical groups of AI experts can and should step in to expose biases to save us from those "monster" scenarios of societal damage such biases could bring.

