Pre-crime, algorithms, artificial intelligence, and ethics
Source: Mark Gibbs


I just binge-listened to an outstanding podcast, LifeAfter, which, without giving too much away, is about artificial intelligence and its impact on people. Here's the show's synopsis:

        When you die in the digital age, pieces of you live on forever. In your emails, your social media posts and uploads, in the texts and videos you’ve messaged, and for some – even in their secret online lives few even know about. But what if that digital existence took on a life of its own? Ross, a low level FBI employee, faces that very question as he starts spending his days online talking to his wife Charlie, who died 8 months ago…

The ethical questions this podcast raises are fascinating and riff on some of the AI-related issues we're only now starting to appreciate.

One of the big real-world issues we're just getting to grips with lies in the way we humans create intelligent systems: whoever does the design and coding brings their own world views, biases, misunderstandings, and, most crucially, prejudices to the party.

A great example of this kind of problem in current AI products was discussed in a recent Quartz article, We tested bots like Siri and Alexa to see who would stand up to sexual harassment. The results of this testing are fascinating and, to some extent, predictable:

        Apple’s Siri, Amazon’s Alexa, Microsoft’s Cortana, and Google’s Google Home peddle stereotypes of female subservience—which puts their “progressive” parent companies in a moral predicament … The message is clear: Instead of fighting back against abuse, each bot helps entrench sexist tropes through their passivity.

Now, some AI apologists might argue that we're in the earliest days of this technology, that the scope of what is required to deliver a general-purpose interactive digital assistant is still being explored, and that weaknesses and oversights are therefore to be expected and will be fixed, all in good time. Indeed, given the sheer magnitude of the work, this argument doesn't, on the face of it, seem unreasonable. The long-term problem is the extent to which these deficiencies become "baked in" to these products such that they can never be wholly fixed, and subtle bias on a topic or position is often more effective at reinforcing belief and behavior than explicit support. Moreover, given that humans prefer to have their prejudices affirmed and supported, and that to be really effective digital assistants will have to learn what their masters want and expect, there's a real risk of self-reinforcing feedback.
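
To make that feedback risk concrete, here's a minimal, purely hypothetical sketch (not from the article, with invented numbers): a toy assistant that learns from which suggestions its user accepts, paired with a user who accepts view-confirming suggestions more often, drifts from a slight initial skew toward recommending almost nothing but confirmation.

    # Toy illustration of a self-reinforcing feedback loop (all values invented).
    # The assistant starts only slightly skewed toward "confirming" content, but
    # because the simulated user accepts confirming suggestions more often, each
    # round of learning pushes the assistant further in that direction.

    import random

    random.seed(42)

    p_confirming = 0.55            # assistant's probability of suggesting confirming content
    ACCEPT_IF_CONFIRMING = 0.8     # assumed: user accepts confirming suggestions more often
    ACCEPT_IF_CHALLENGING = 0.3
    LEARNING_RATE = 0.05           # how strongly each accepted suggestion shifts the assistant

    for step in range(200):
        confirming = random.random() < p_confirming
        accept_prob = ACCEPT_IF_CONFIRMING if confirming else ACCEPT_IF_CHALLENGING
        accepted = random.random() < accept_prob

        if accepted:
            # Nudge the assistant toward whatever kind of content got accepted.
            target = 1.0 if confirming else 0.0
            p_confirming += LEARNING_RATE * (target - p_confirming)

        if step % 50 == 0:
            print(f"step {step:3d}: P(confirming suggestion) = {p_confirming:.2f}")

    print(f"final:     P(confirming suggestion) = {p_confirming:.2f}")

The point of the toy is only this: a small asymmetry in what gets accepted is enough to drive the learned behavior to an extreme, without anyone ever deciding that it should.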

The danger of baked-in acceptance and even support of sexist tropes is obviously bad in intelligent assistants, but when AI is applied to life-changing real-world problems, even the subtlest built-in bias becomes dangerous. How dangerous? Consider the non-AI, statistics-based algorithms that have for some years been used to derive "risk assessments" of criminals, as discussed in ProPublica's article Machine Bias, published last year. These algorithmic assessments – which are, essentially, "predictive policing" (need I mention "pre-crime"?) – determine everything from whether someone can get bail and for how much, to how harsh their sentence will be.

        [ProPublica] obtained the risk scores assigned to more than 7,000 people arrested in Broward County, Florida, in 2013 and 2014 and checked to see how many were charged with new crimes over the next two years, the same benchmark used by the creators of the algorithm.

        The score proved remarkably unreliable in forecasting violent crime: Only 20 percent of the people predicted to commit violent crimes actually went on to do so.

        When a full range of crimes were taken into account — including misdemeanors such as driving with an expired license — the algorithm was somewhat more accurate than a coin flip. Of those deemed likely to re-offend, 61 percent were arrested for any subsequent crimes within two years.

That's bad enough but a sadly predictable built-in bias was revealed:

        In forecasting who would re-offend, the algorithm made mistakes with black and white defendants at roughly the same rate but in very different ways.

                The formula was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants.
                White defendants were mislabeled as low risk more often than black defendants.
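
The kind of disparity ProPublica describes can be checked with a straightforward per-group error-rate comparison. The sketch below is only illustrative: the records, field names, and numbers are invented placeholders, not the Broward County data or actual COMPAS scores, but the calculation – false positive and false negative rates broken out by group – is the same idea.

    # Illustrative per-group error-rate check (toy records, invented for this sketch).
    # For each group, count how often low-risk labels preceded re-offending (false
    # negatives) and how often high-risk labels preceded no re-offending (false positives).

    from collections import defaultdict

    # (group, labeled_high_risk, reoffended) -- placeholder records only
    records = [
        ("black", True,  False), ("black", True,  True),  ("black", True,  False),
        ("black", False, True),  ("black", False, False), ("black", True,  False),
        ("white", False, True),  ("white", False, False), ("white", True,  True),
        ("white", False, True),  ("white", False, False), ("white", True,  False),
    ]

    counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})

    for group, high_risk, reoffended in records:
        c = counts[group]
        if reoffended:
            c["pos"] += 1
            if not high_risk:
                c["fn"] += 1   # labeled low risk but did re-offend
        else:
            c["neg"] += 1
            if high_risk:
                c["fp"] += 1   # labeled high risk but did not re-offend

    for group, c in counts.items():
        fpr = c["fp"] / c["neg"] if c["neg"] else 0.0
        fnr = c["fn"] / c["pos"] if c["pos"] else 0.0
        print(f"{group:5s}  false positive rate = {fpr:.2f}   false negative rate = {fnr:.2f}")

A system can look "equally accurate" overall while its errors fall very differently on different groups, which is exactly the pattern the quoted findings describe.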

The impetus to use algorithms to handle complex, expensive problems in services such as the cash-strapped court system is obvious, and even when serious flaws are identified in these systems, there's huge opposition to stopping their use: the algorithms give the illusion of solving high-level system problems (consistency of judgments, cost, and speed of process) even though the consequences to individuals (disproportionate loss of freedom) are clear to everyone and life-changing for those affected.

Despite these well-known problems with risk assessment algorithms, there's absolutely no doubt that AI-based solutions relying on Big Data and deep learning are destined to become de rigueur, and the biases and prejudices baked into those systems will be much harder to spot.

Will these AI systems be more objective than humans in quantifying risk and determining outcomes? Is it fair to use what will be alien intelligences to determine the course of people's lives?

My fear is that the sheer impenetrability of AI systems, the lack of understanding by those who will use them, and the "Wow factor" of AI will make their adoption not an "if" but a "when" that is much closer than we might imagine. The result will be a great ethical void that supports even greater discrimination, unfair treatment, and expediency in an already deeply flawed justice system.

We know that this is a highly likely future. What are we going to do about it?

