'Gaydar' shows just how creepy computer algorithms can get
Source: Cathy O'Neil


Artificial intelligence keeps getting creepier. In one controversial study, researchers at Stanford University have demonstrated that facial recognition technology can identify gay people with surprising precision, although many caveats apply. Imagine how that could be used in the many countries where homosexuality is a criminal offense.

The lead author of the “gaydar” study, Michal Kosinski, argues that he’s merely showing people what’s possible, so they can take appropriate action to prevent abuse. I’m not convinced.

When people hear about algorithms recognizing people through masks, finding terrorists and identifying criminals, they tend to think of dystopian movies like “Minority Report,” in which Tom Cruise prevented murders with the help of “precogs” — human beings with supernatural, albeit fatally flawed, foresight caused by a childhood neurological disease.

Reality is much worse. We don’t have precognition. We have algorithms that, although better than random guessing and sometimes more accurate than human judgment, are very far from perfect. Yet they’re being represented and marketed as if they’re scientific tools with mathematical precision, often by people who should know better.
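To make that gap concrete, here is a minimal back-of-the-envelope sketch in Python, using purely hypothetical numbers rather than figures from the Stanford study, of why a classifier that comfortably beats random guessing can still be wrong about most of the people it flags once the trait being predicted is rare:

# Illustrative sketch with hypothetical rates (not from the Stanford study):
# even a model far better than chance mostly raises false alarms
# when the trait it flags is uncommon in the screened population.

def flagged_breakdown(population, base_rate, true_positive_rate, false_positive_rate):
    """Return (true positives, false positives) among the people the model flags."""
    have_trait = population * base_rate
    lack_trait = population - have_trait
    true_positives = have_trait * true_positive_rate
    false_positives = lack_trait * false_positive_rate
    return true_positives, false_positives

# Suppose 1,000,000 people are screened, 5% actually have the trait,
# the model catches 90% of them, and it wrongly flags 10% of everyone else.
tp, fp = flagged_breakdown(1_000_000, 0.05, 0.90, 0.10)
print(f"Correctly flagged: {tp:,.0f}")                          # 45,000
print(f"Wrongly flagged:   {fp:,.0f}")                          # 95,000
print(f"Share of flags that are wrong: {fp / (tp + fp):.0%}")   # about 68%

Under these assumed rates, roughly two out of three people the model flags are flagged in error. That is the distance between "better than random guessing" and the mathematical precision these tools are marketed as having.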

This is an abuse of the public’s trust in science and in mathematics. Data scientists have an ethical duty to alert the public to the mistakes these algorithms inevitably make — and the tragedies they can entail.

That’s the point I made in a recent conversation with Kosinski, who is also known for creating the “magic sauce” psycho-profiling algorithm that Cambridge Analytica later adapted to campaign for both Brexit and Donald Trump.

His response was that we’re both trying to warn the world about the potential dangers of big data, but with different methods. He’s showing the world the “toy versions” of algorithms that can be, and surely are being, built with bigger and better data elsewhere — and he doesn’t derive any income from the commercial applications. Academic prototyping, if you will.

I don’t buy that. It’s like complaining about the dangers of war while building bombs. Even “toy versions” can be very destructive when people put too much faith in them. And they do, which is why companies like Cambridge Analytica can make money peddling their secret sauce.

Consider the gaydar algorithm. A government could use it to target civilians, declaring certain people “gender atypical” and “criminally gay” because the black box says so — with no appeals process, because it’s “just math.” We’ve already seen this very scenario play out in other contexts, for example with algorithmic assessments of public school teachers. The difference is that instead of losing their jobs, people could lose their freedom — or worse.

People who work with big data must guard against this. Of course, oppressive regimes don’t need algorithms to be oppressive. But we shouldn’t allow them to appeal to the authority of math and science in doing so. We should make them do their nasty things in full view. We should expose their political nature, because political fights at least might look winnable in the long run.

When I asked Kosinski about this, he seemed more worried about the algorithm working really well than about the false pretense of scientific authority. Maybe he thinks everyone understands the flaws. Maybe he believes it’s just a matter of time before the algorithms finding criminals and terrorists become much more accurate — although he did acknowledge that there’s little reason to think his gaydar results would translate to other countries.

I don’t think I convinced him to stop building creepy models for the sake of demonstrating how creepy things might get. So watch out.

