Finding Our Humanity With AI
Source: Adam Thierer




Fears about artificial intelligence have long made great fodder for sci-fi books and movies. But now some of those same fears are starting to influence public policy discussions about real-world AI technologies. If such concerns bubble up into a full-blown technopanic, it could undermine AI's life-enriching, and in many cases life-saving, potential.

AI is already at work in our lives every day: in real-time traffic apps, shopping recommendations from online stores, fraud-detection notices from banks, and even medical diagnostic tests. But AI holds the potential for an even more profound impact, much of which we cannot predict in advance.

Critics, however, worry that AI will lead to a variety of ills if it is let loose into the wild without regulations already in place. Some fear economic disruptions, including job losses as robots and algorithmic systems get smarter. Others worry about privacy and security implications. Still others worry about how algorithms might discriminate against women and minorities.

Perhaps motivated by such fears, some scholars insist we need a new "Federal Robotics Commission" to help guide AI's development. A recent bill in Congress proposes a "Federal Advisory Committee on the Development and Implementation of Artificial Intelligence" to do just that.



The concerns about discrimination deserve careful consideration. Since the publication of a 2016 ProPublica study, discussion of the potential biases that machine learning and AI might generate has abounded. More recently, research has raised concerns that AI may exhibit racial or gender biases because it is trained on old data.

AI is not perfect, and because it is created by imperfect individuals, it can learn our bad habits. Technologies, including AI systems, can therefore reflect the biases of their creators. But can AI also help us identify those biases?

Machines and algorithms can provide more objective standards to discover real-world bias and discrimination and combat their effects. AI can thus be a useful tool for moving toward a fairer and more inclusive society.
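
To make that point concrete, here is a minimal sketch, in Python, of one way logged algorithmic decisions can be audited for group disparities. Nothing here comes from the article itself: the records, group labels, and the 0.8 rule of thumb are illustrative assumptions, and real fairness audits involve far more nuance.

# Minimal sketch: auditing logged algorithmic decisions for group disparities.
# All records and group labels below are hypothetical, for illustration only.
from collections import defaultdict

# Each record: (group label, whether the automated decision was favorable)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", True),
]

# Tally favorable outcomes per group.
counts = defaultdict(lambda: [0, 0])  # group -> [favorable, total]
for group, favorable in decisions:
    counts[group][1] += 1
    if favorable:
        counts[group][0] += 1

rates = {g: fav / total for g, (fav, total) in counts.items()}
print("Favorable rates:", rates)

# Disparate-impact ratio: lowest group rate divided by highest group rate.
# A common (though debated) rule of thumb flags ratios below 0.8 for review.
ratio = min(rates.values()) / max(rates.values())
print(f"Disparate-impact ratio: {ratio:.2f}")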

Researchers are already working to correct potential algorithmic biases in a wide variety of innovative tools, ranging from biometric scanners to sentencing algorithms. If left free to develop and innovate, researchers and entrepreneurs will be able to overcome such programming issues far more easily than we can "re-program" an individual's or a society's biases.

For example, research presented at the 2016 American Bar Association annual meeting showed that prospective jurors were more likely to remember negative details about a defendant named Tyrone than about one named William. None of the individuals in such studies directly expressed favoritism for one race over another, but their actions showed that they were subconsciously acting on certain beliefs that could lead to miscarriages of justice.



As R Street Institute analyst Caleb Watney has written, these mistakes are tragedies whether they involve human bias or "algorithmic bias." AI can help solve this problem by removing some of our unconscious preferences and cataloging these biases to start conversations.

For example, AI provides a more objective starting point for a wide range of decisions, from pre-trial release to the calculation of insurance premiums. Because of the grave consequences of potential errors, some human review of important AI decisions, such as those in the legal system or employment, might still be necessary to guard against biased algorithms.

Relying on an unbiased (or at least less biased) AI would also provide support for an individual facing discrimination, by showing that a more objective measure would have arrived at a different result. It is much easier to determine which factors led to an algorithm's conclusion than to accurately determine what was in the heart and mind of an individual.
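
To illustrate that last point, here is a minimal sketch of how the factors behind a simple linear scoring model's conclusion can be read off directly. The model, its weights, and the applicant's values are all hypothetical assumptions; the point is only that each factor's contribution to the score is inspectable in a way a human decision-maker's reasoning is not.

# Minimal sketch: reading off which factor drove a simple linear model's score.
# The weights and the applicant's feature values are hypothetical, for illustration.
import math

weights = {
    "prior_missed_payments": 0.9,
    "income_to_debt_ratio": -0.6,
    "years_at_current_job": -0.3,
}
bias_term = 0.1

applicant = {
    "prior_missed_payments": 2.0,
    "income_to_debt_ratio": 1.5,
    "years_at_current_job": 4.0,
}

# Per-feature contributions to the raw score are just weight * value,
# so the "reason" for the conclusion can be listed explicitly.
contributions = {f: weights[f] * applicant[f] for f in weights}
score = bias_term + sum(contributions.values())
probability = 1.0 / (1.0 + math.exp(-score))  # logistic link

for feature, contrib in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"{feature:>24}: {contrib:+.2f}")
print(f"Predicted risk: {probability:.2f}")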

Some in the tech field, including innovators like Elon Musk, have expressed concerns that AI is advancing too rapidly for us to control. But a recent Mercatus Center paper noted that such concerns date back over 50 years and are more grounded in dystopian science fiction than in reality. Policymakers should, therefore, be careful when intervening in a new, rapidly evolving sector like this one.

Similarly, solutions such as open-source software can promote greater transparency into "black box" decision-making and allow innovators and entrepreneurs to identify and critique potential bias in one another's processes. Such critiques will likely lead to better outcomes and important debates, free of government intervention.

AI alone can't solve all of our problems, but it may be able to provide solutions that are more objective than the ones we ourselves generate.

