

Here's The One Thing That Makes Artificial Intelligence So Creepy For Most People
Source: Lauren deLisa Coleman



People behind the AI curtain. In this photo, Jessica McShane, an employee at Interactions Corp., foreground, monitors person-to-computer communications, helping computers understand what a human is saying, in the "intent analysis" room at the company's headquarters in Franklin, Mass. “That information is used to feed back into the system using machine learning to improve our model,” said Robert Nagle, Interactions’ chief technology officer. “Next time through, we’ve got a better chance of being successful.” (AP Photo/Steven Senne)

In this Oct. 31, 2018, photo, a screen displays a computer-generated image of a Watrix employee walking during a demonstration of the firm's gait recognition software at the company's offices in Beijing. A Chinese technology startup hopes to begin selling software that recognizes people by their body shape and how they walk, enabling identification when faces are hidden from cameras. Already used by police on the streets of Beijing and Shanghai, “gait recognition” is part of a major push to develop artificial-intelligence and data-driven surveillance across China, raising concern about how far the technology will go. (AP Photo/Mark Schiefelbein)

As many businesses prepare for the coming year, one of their key priorities is determining the best use cases for artificial intelligence and how to implement it strategically around the company's core competencies. This is challenging on a variety of levels. But as this work occurs, one of the most important narratives in the arena is also coming to light: the discussion of how this emerging tech space directly intersects with ethics, culture, integrity and, quite frankly, the unconscious makeup of just who might be minding the AI store as it is being developed.

The problem is that there are currently very few solid directives, templates or litmus tests for such growing concerns, given the "move fast and break things" mindset many observe in the enterprise AI sector. That leaves one to wonder whether the few wise thought-leaders in the space will truly be heard and heeded, or simply drowned out by the promise of power and control through the expanding, amorphous specter that is AI.

Indeed, what does owning responsibility for AI mean, what does it look like, and where does that buck actually stop?

Such was the backdrop for a particularly intriguing stage conversation at the recent AI Summit in New York City. Billed as the world's first and largest conference and exhibition on the practical implications of AI for enterprise organizations, the event brought together executives from Google, NBC Universal, Microsoft, IBM and many more, who flocked to discuss, demo, deal-make and learn about all things AI. In its third year, the conference offered a number of C-suite speakers from major companies, but one of the most provocative and troubling sessions was a panel entitled "Responsible AI: Setting the foundations for a fair, ethical, diverse AI proposition in your organization."

Issues around tracking data, public policy, the integrity and reliability of the actual work being performed, and its impact on and benefit to society were only some of the main points of discussion. The panel brought together a number of thought-leaders on the troubling matter of responsibility and AI: what area of a company should govern AI ethics, what those ethics should be, and the massive tangle of man and machine. And, judging by the blank looks after the panel's conclusion, there is no consensus or industry-wide code of ethics in sight.

To further complicate matters, the question of how to attract a diverse range of employees into the AI space was noted as a challenge. And even once they are onboard, there are further concerns about the health and well-being of such employees, given that we still do not know exactly what tracking, drilling into and analyzing machine-identified patterns will actually do to the humans sifting through such data, particularly if that data is negative in some manner, hour after hour, day after day.


Then there are the very real challenges of accessing and using representative data to either prove or disprove such benefit or harm from AI. For example, Jane Nemcova, Vice President and General Manager of Global Services for Machine Intelligence at Lionbridge, spoke about the fact that the very policies created to help prevent discrimination also block access to the solid, plentiful representative data needed to determine how certain uses of AI may adversely impact certain demographics of employees or various segments of society overall. Thus Lionbridge itself seeks to form solid partnerships so that no rules are broken, but quite naturally, this all takes time.

Hilary Mason, General Manager for Machine Learning at Cloudera, noted on the panel that industry guidance on ethics is spotty at best for anyone not working at the intersection of AI and academia. "The question becomes, 'how do you even ask these ethical questions?'" she stated. So, Mason says, Cloudera is creating tools that give businesses at least a framework for which ethical questions to ask, something to which they can refer. And this seems to be urgently needed by the industry. Indeed, the frightening thing is that when Mason asked the audience to indicate by a show of hands how many conduct code reviews or bias-risk assessments, almost no hands went up.
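Neither Cloudera's tools nor the panel's framework were spelled out on stage, but to make the idea of a bias-risk review concrete, here is a minimal sketch of what one automated check might look like: comparing a model's positive-outcome rates across demographic groups and flagging large gaps for human review. The function names, sample data and 0.1 threshold are illustrative assumptions, not anyone's published methodology.

```python
# Hypothetical sketch of a bias-risk check that could run alongside a code
# review: compare a model's positive-prediction rates across groups.
# The data, group labels and 0.1 threshold are illustrative assumptions.
from collections import defaultdict

def selection_rates(predictions, groups):
    """Return the positive-prediction rate for each demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    return {g: positives[g] / totals[g] for g in totals}

def bias_risk_flag(predictions, groups, max_gap=0.1):
    """Flag the model if selection rates differ by more than max_gap."""
    rates = selection_rates(predictions, groups)
    gap = max(rates.values()) - min(rates.values())
    return gap > max_gap, rates, gap

flagged, rates, gap = bias_risk_flag(
    predictions=[1, 0, 1, 1, 0, 0, 1, 0],
    groups=["a", "a", "a", "a", "b", "b", "b", "b"],
)
print(rates, gap, "review needed" if flagged else "ok")
```

Even a check this crude would give a review meeting something concrete to argue about, which is roughly the gap the panel said most companies have not yet filled.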

Chris Wiggins, Chief Data Scientist at the New York Times, said the Times tries to make the ethical choice the easy choice, so that people will simply want to make it, but that such rules will need to change as the technology changes. "The key," said Wiggins, "is to have your standards linked to the company's principles so that when and if there is conflict, you can easily refer back to your true north."

Aside from such general self-policing, the panel noted that one of the key elements missing in enterprise AI today is any incentive for people to ask the biggest ethical questions. But it goes even deeper than this. Once the questions are asked, it is not clear who should sign off on the ethics matrix.

One panelist noted that such concerns should not sit solely with a data scientist but rather with an entire team, including customer service, the CFO, legal and others, to truly discuss values, federal policy and more. So far, this area seems to be a wildcard, with businesses either figuring it out, or not, as they go. The panel observed that what we are all dealing with are very high-impact algorithms, some of which touch sectors that already have laws on the books which are not currently being checked against. According to the panel, one of the biggest next-level conversations in AI will be what it looks like for an algorithm to be compliant with, for example, current, real-world anti-discrimination laws.
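The panel did not say what such a compliance check would contain, but U.S. employment law already offers one concrete test: the EEOC's "four-fifths rule," under which a selection rate for any group below 80% of the highest group's rate is treated as evidence of adverse impact. A hypothetical sketch of applying that rule to an algorithm's output (the figures here are invented for illustration):

```python
# Hypothetical sketch of checking a hiring algorithm's output against the
# EEOC "four-fifths rule": each group's selection rate should be at least
# 80% of the most-selected group's rate. The counts below are illustrative.

def four_fifths_check(selected_by_group, applicants_by_group, ratio=0.8):
    """Return groups whose selection rate falls below `ratio` times the
    highest group's selection rate (possible adverse impact)."""
    rates = {g: selected_by_group[g] / applicants_by_group[g]
             for g in applicants_by_group}
    benchmark = max(rates.values())
    return {g: r for g, r in rates.items() if r < ratio * benchmark}

violations = four_fifths_check(
    selected_by_group={"group_a": 48, "group_b": 12},
    applicants_by_group={"group_a": 80, "group_b": 40},
)
print(violations or "no adverse-impact flag under the four-fifths rule")
```

A passing check is not a legal defense, of course; the point is only that "compliant algorithm" can be made at least partly testable, which is more than most deployments attempt today.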

Yet tons and tons of algorithms are already well in motion, iterating as you read this, without such questions being asked.

Maya Wiley, Senior Vice President for Social Justice at The New School, added that the conversation around AI should be expanded beyond harm and its prevention to ask how business can actually use AI to drive more equity in society. "This is an opportunity for industry to actually think proactively about a space and to consider strong study of ethnography as it intersects with business," she said.

Wiley believes that many of the concerns about AI stem from the fact that hardly an engineer or company is looking at how the results of a particular algorithm actually affect a certain individual or group of users: how they felt, how it may have adversely impacted their lives or livelihoods, or any number of other scenarios.

Think about that for a moment. The photo and caption just above under the headline of this story, for example, show an AI-based technology that goes beyond face recognition. It would be interesting to know how and whether Watrix is considering tracking the actual impact on individuals who are forced to interact with police based on accurate or inaccurate identification, whether any sentencing received is actually fair, how to help make for a more just system, how black markets will likely spring up to counteract such software and so much more.

"And," Wiley continues,    "such lack of consideration around the psycho-social implications around AI and probably will create many problems in our culture because of implicit bias that is present within those who are developing or working with the various level of AI. You can’t be less bias if you don’t know about subconscious bias, bias plays a part in the development, so understanding about the presence of such elements needs to be at the forefront of conversations and is not, yet, for the most part."

As the panel asked, what does it mean to trust something that is mathematical? Moreover, what does it mean to trust, or be subjected to, something mathematical that seemingly has very few rules and little accountability, made by people who appear to have no real guidelines at the present moment?


