Artificial intelligence is too powerful to be left to tech giants
Source: Hemant Taneja


Facebook CEO Mark Zuckerberg’s testimony before Congress made one thing clear: the government needs a Federal Artificial Intelligence Agency.

Facebook is a canary in the proverbial AI coal mine. AI is going to play an enormous role in our lives and in the global economy. It is the key to self-driving cars, the Amazon Alexa in your home, autonomous trading desks on Wall Street, innovation in medicine, and cyberwar defenses.


Technology is rarely good or evil — it’s all in how humans use it. AI could do an enormous amount of good and solve some of the world’s hardest problems, but that same power could be turned against us. AI could be set up to inflict bias based on race or beliefs, invade our privacy, learn about and exploit our personal weaknesses — and do a lot of nefarious things we can’t yet foresee.

Which means that our policymakers must understand and help guide AI so it benefits society. In that regard, the Facebook sessions were alarming. The questions from senators made it clear they know little about AI, much less Facebook itself. Sen. Orrin Hatch didn’t realize Facebook makes its money by selling ads targeted at users based on their personal data, while Sen. Brian Schatz couldn’t quite grasp the concept of encryption. The senators repeatedly asked about data, but rarely about how Facebook’s AI exploits the data. To borrow a phrase, when Zuckerberg talked about Facebook’s algorithms, the assembled Senators were like dogs watching television.


It’s not unusual for a powerful new technology to baffle our lawmakers. We don’t elect people because they’re experts in tech; we elect them because they’re experts at governing. So, in the past, to help our officials manage an important emerging technology, the government has set up some sort of agency to provide expertise and oversight. For instance, World War II unleashed the ability to split atoms — a technology that created nuclear bombs, but also had the potential for nuclear energy. (There were even ideas at the time that atomic energy could be used to propel rockets and make your garden grow.) In 1946, Congress passed the Atomic Energy Act establishing the Atomic Energy Commission, which later evolved into the current Nuclear Regulatory Commission.

Now, the NRC also reveals the catch in such oversight. We don’t want overreaching regulation that goes beyond keeping us safe and ends up stifling innovation. Regulators helped make it so difficult to develop atomic energy that today the U.S. gets only 20% of its electricity from nuclear power.

So while we need a Federal Artificial Intelligence Agency, or FAIA, I would prefer to see it created as a public-private partnership. Washington should bring AI experts from the tech industry into a federal agency designed to understand and direct AI and to inform lawmakers. Perhaps the AI experts would rotate through Washington on a kind of public service tour of duty.


Importantly, we’re at the beginning of a new era in government — one where governance is software-defined. The nature of AI and algorithms means we need to develop a new kind of agency — one that includes both humans and software. The software will help monitor algorithms. Existing, old-school regulations that rely on manual enforcement are too cumbersome to keep up with technology and too “dumb” to monitor algorithms in a timely way.


Software-defined regulation can monitor software-driven industries better than regulations enforced by squads of regulators. Algorithms can continuously watch emerging utilities such as Facebook, looking for details and patterns that humans might never catch but that nonetheless signal abuses. If Congress wants to make sure Facebook doesn’t exploit political biases, it could direct the FAIA to write an algorithm to look for that behavior.

It’s just as important to have algorithms that keep an eye on the role of humans inside these companies. We want technology that can tell if Airbnb hosts are illegally turning down minorities or if Facebook’s human editors are squashing conservative news headlines.

The watchdog algorithms can be like open-source software — open to examination by anyone, while the companies keep their proprietary algorithms and data private. If the algorithms are public, anyone can run various datasets against them and analyze for “off the rails” behaviors and unexpected results.
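To make the idea concrete, here is a minimal sketch of what one such open watchdog check could look like, written in Python. The audit data, group labels, and 80% disparity threshold below are hypothetical stand-ins for illustration only; a real regulatory check would be far more sophisticated.

```python
# Hypothetical open "watchdog" check: given a public audit dataset of
# decisions (e.g., ad approvals or booking acceptances), compare outcome
# rates across groups and flag large disparities for human review.
from collections import defaultdict

# Each record: (group label, decision), where decision is True if approved.
AUDIT_DATA = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

DISPARITY_THRESHOLD = 0.8  # flag if one group's rate is below 80% of another's


def approval_rates(records):
    """Compute the approval rate for each group in the audit data."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        approvals[group] += int(approved)
    return {group: approvals[group] / totals[group] for group in totals}


def flag_disparities(rates, threshold=DISPARITY_THRESHOLD):
    """Return (group, reference group, ratio) triples that breach the threshold."""
    flags = []
    for g1, r1 in rates.items():
        for g2, r2 in rates.items():
            if g1 != g2 and r2 > 0 and r1 / r2 < threshold:
                flags.append((g1, g2, r1 / r2))
    return flags


if __name__ == "__main__":
    rates = approval_rates(AUDIT_DATA)
    print("Approval rates:", rates)
    for g1, g2, ratio in flag_disparities(rates):
        print(f"Flag: {g1} approved at {ratio:.0%} of {g2}'s rate; needs review")
```

Because the check and the audit data are both public, anyone can rerun the analysis against new datasets, dispute the threshold, or propose a better methodology, which is exactly the kind of scrutiny a closed, proprietary system never gets.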


Clearly, AI needs some governance. As Facebook is proving, we can’t rely on companies to monitor and regulate themselves. Public companies, especially, are incentivized to make the biggest profits possible, and their algorithms will optimize for financial goals, not societal goals. But as a tech investor, I don’t want to see an ill-informed Congress set up regulatory schemes for social networks, search and other key services that then make our dynamic tech companies as dull and bureaucratic as electric companies. (I also don’t want to see U.S.-based companies increasingly regulated by other nations, which is essentially what’s happening with the European Union’s General Data Protection Regulation, which goes into effect May 25, 2018.)

Technology companies and policymakers need to come together soon and share ideas about AI governance and the establishment of a software-driven AI agency. IEEE, the powerful engineering society, is already working on guidelines it calls Algorithmic Bias Considerations to help with the “ethical alignment in intelligent and autonomous systems.” The resources are available. The industry stands ready.

Let’s do this before bad regulations get enacted — and before AI gets away from us and does more damage. We have a chance right now to tee up AI so it does tremendous good. To unleash it in a positive direction, we need to get the checks and balances in place now.

