Grounding AI: Artificial Intelligence Is Closer -- and Less Awesome -- than Most Think
By Colin Wood
Artificial intelligence (AI) is not some Asimovian fantasy, nor an extravagance best left to starch-smocked scientists clinking beakers together in an underground laboratory. AI is an opportunity to create tools that save money, save lives and improve life in ways that can’t be measured.
A Computer by Any Other Name
A computer with a consciousness or human-like executive function doesn’t exist yet, so strong AI remains the purview of Hollywood and the Centre for the Study of Existential Risk. Today’s humanist concerns himself with weak AI applications, the kind of smart software with narrowly defined functionality that increasingly pervades daily life across economic classes.
Google uses AI to power speech and image processing and language translation, and to answer billions of ambiguous search queries daily. Google also recently released the source code for TensorFlow, an engine that makes it easier for developers to insert AI modules into their applications, a landmark in artificial intelligence accessibility sure to elevate the competence of the average app. Tesla’s autopilot feature learns driver habits. Law enforcement agencies catch serial killers using models of honeybee movement patterns. Netflix users watch movies and Amazon users read books based on recommendations delivered by smart algorithms. Cybersecurity companies use AI to scan millions of simultaneous events in less than a second.
Game theory, machine learning, deep learning, reinforcement learning, neural networks, fuzzy logic, data analytics, data mining and two dozen more buzzwords represent the disciplines that constitute and relate to the field known as artificial intelligence. Distinctions between them exist, but are trivial for the end user and dull for everyone but philosophers and pedants. The term “AI” can be rightfully applied to any system that inhabits those corridors typically reserved for human wisdom and experience.
Today’s AI fills the computational gaps in human ability, and where computers fail to exercise executive function, humans stand by to hold the flight controls. This symbiotic relationship, an augmentation of human endeavor, undermines the tale perpetuated by those with a flair for the dramatic. Guarding against a robotic uprising is prudent, but Terminator-esque imagery distracts from the positive influence of today’s AI.
Climate change, rising sea levels, unsustainable population growth, pollution, Kanye West, disease, war, greed and willful ignorance could well combine forces to end humanity, but if AI is to have a role in that play, it’s not the role of bad guy. It’s that of a beacon that guides Earth to safety.
Not Carpenters
Lead poisoning is a dreadful fate made worse by the fact that it mostly affects children. If lead doesn’t kill a child, it can damage nearly every part of the body. Lead can cause blindness, deafness, memory problems, headache, delirium, cognitive deficits, slurred speech, limb pain, altered skin color, impaired coordination, seizures and hallucinations. The effects are irreversible, and each year, an estimated 600,000 children globally are diagnosed with cognitive disabilities caused by lead poisoning. About 140,000 children die from it.
The millions of buildings that were treated with lead paint before it was outlawed in the U.S. in 1978 make prevention an onerous task, said Rayid Ghani, research director at the University of Chicago’s Computation Institute and director at the Center for Data Science and Public Policy.
Agencies like the Chicago Department of Public Health deal with lead by waiting for a child to get sick or be flagged during routine screening, and then going to the house to fix the problem. But a new research project run through a program called the Eric & Wendy Schmidt Data Science for Social Good Fellowship at the University of Chicago may soon allow cities to predict instead of react.
Rayid Ghani, research director, University of Chicago Computation Institute. Photo by Flickr/CEBIRT Australia
Using today’s methods, the department’s chance of proactively finding lead is about 2 percent, rendering most proactive inspections pointless, Ghani said. Using a predictive model powered by 20 years of blood test data and home lead inspection records, researchers are “fairly confident” they can improve their proactive home inspection hit rate to 30 or 40 percent. A field trial running now will show if their initial testing is accurate, and potential integration with electronic medical records systems could flag a newborn’s home for inspection before the mother even leaves the hospital.
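The approach Ghani describes can be sketched, in miniature, as a risk model that ranks homes for proactive inspection. The feature names, weights and bias below are invented for illustration; the real model is trained on two decades of blood tests and inspection records, not hand-set coefficients.

```python
# Hypothetical sketch: rank homes for proactive lead inspection by predicted risk.
# Feature names and weights are illustrative, not the Chicago model's actual inputs.
import math

# Illustrative weights a trained model might assign (not real coefficients).
WEIGHTS = {
    "built_before_1978": 2.0,    # lead paint was legal in the U.S. before 1978
    "prior_positive_test": 3.0,  # elevated blood-lead result at this address
    "open_violations": 1.0,      # outstanding housing-code violations
}
BIAS = -4.0

def risk_score(home):
    """Logistic score in [0, 1]: higher means inspect sooner."""
    z = BIAS + sum(w * home.get(k, 0) for k, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))

def prioritize(homes):
    """Return homes sorted so the highest-risk addresses are inspected first."""
    return sorted(homes, key=risk_score, reverse=True)

homes = [
    {"id": "A", "built_before_1978": 1, "prior_positive_test": 1, "open_violations": 2},
    {"id": "B", "built_before_1978": 0, "prior_positive_test": 0, "open_violations": 0},
    {"id": "C", "built_before_1978": 1, "prior_positive_test": 0, "open_violations": 1},
]
ranked = [h["id"] for h in prioritize(homes)]
print(ranked)  # highest-risk home first
```

Sending inspectors down such a ranked list, rather than to addresses chosen at random, is what lifts the hit rate from 2 percent toward the 30 or 40 percent the researchers expect.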
“Their job is not to go and fix things afterwards,” Ghani said. “It’s to prevent it from happening in the first place. I would argue this is the only way they should be doing this.”
The National Center for Biotechnology Information estimates that lead poisoning costs tens to hundreds of billions of dollars, and these models could reduce those figures while focusing efforts across all kinds of inspections, said Ghani. In fact, his group is now running a field trial with the Environmental Protection Agency to bring scientific rigor to choosing which sites are worthy of an inspector’s time.
Ghani is also working with police departments in Charlotte, N.C., Knoxville, Tenn., and Los Angeles to develop early detection systems that may help reduce adverse interactions with the public. It doesn’t take a computer to figure out if six officers are receiving 80 percent of the complaints, Ghani said — these models identify trends in data that warn of impending trouble and point to potential solutions. If officers are found routinely getting into scuffles after six or more domestic abuse calls, a department can take steps to mitigate risk, like spreading those calls around or offering more training.
“The idea is to build the system in one department, test it out in three or four other departments — small, large, medium size — and then see what it takes to scale to a more national system,” Ghani said.
The Power of AI
Artificial intelligence systems can be placed in four common categories of use: prevention, resource prioritization, policy formation and benchmarking. But whatever the goal, even the most skillful or experienced people can enhance their work with AI, said Alan Krumholz, principal data scientist at G2 Web Services.
G2 makes software that does things like predict how likely it is that someone will default on a loan, or how likely it is that a given website contains illegal content, like child pornography or drugs. For the past decade, G2 researchers have been labeling websites as having or not having certain content. And the company is only able to forge such predictive models, Krumholz said, because it has a large store of reliably labeled data and scientists who are skilled at the art of training their algorithms.
“If you look at where things are today, present-time tournaments, the best chess players playing alone and the best supercomputers playing alone cannot compete with regular chess players with regular computers helping them make decisions,” Krumholz said. “The team between computer and human is where the power of AI is.”
In September, a team of researchers at the University of Michigan led by Satinder Singh Baveja, electrical engineering and computer science professor, began a multi-year project funded by IBM to create an AI that students can talk to when they need advising.
“AI is making the biggest advances in things like speech recognition, computer vision problems and processing millions of images very fast,” Baveja said. “A lot of it’s driven by much faster processing, much cheaper processing and having much more data.”
Within a year, the team hopes to have an early version of the tool that students can use to receive a customized list of classes they should take based on their unique circumstances. Human advisers will remain essential, Baveja said, but humans suffer from constraints such as limited time and availability.
And while human advisers are good at recognizing contextual information like a student’s emotional state, even the most experienced adviser doesn’t have in mind a statistical overview of all student and class data enriched by concomitant patterns and trends. This system will be designed as a task-driven conversation simulator that asks questions and then draws from a massive database that includes information like the trajectories of past students, correlation between course and career, ratemyprofessors.com scores, historical feedback on course difficulty, and degree requirements.
The idea is to create a first point of contact for students that can be accessed at any time, and if they don’t get the help they need, then they can make an appointment with a human adviser. Improving the quality and accessibility of advising at a large institution like the University of Michigan, Baveja said, will help students feel better supported with customized advice that puts the school’s data stores to work.
If governments are using boxcutters to unpack their data today, AI is a blowtorch. Gathering even more data through Internet of Things devices is an enticing and common proposition these days, said Chad Kenney, chief performance officer at the Office of Performance and Data Analytics in Cincinnati, but cities still need to leverage the data they already have.
“I come at it from a very operational standpoint in that whatever insight is generated via analytics and data science, it’s really important to think through the whole value chain of that insight and figure out how that insight is actually going to be used to change behavior,” Kenney said. “Step one has been [to] figure out what the mission is of any given part of the organization and make sure we’re asking the right questions of that organization.”
Through a partnership with Ghani’s team at the University of Chicago, Cincinnati completed a pilot around blight prevention last summer. A predictive model drawing from data including property values, taxes, water shutoffs, citations, crime records and permits boosted proactive inspection hit rates from 43 to 78 percent, Kenney said. The city is now working to mature the model and embed it in its operations.
“Whereas a lot of municipal operations are still very reactive, I think this is the mechanism that has the power to help municipal governments become proactive,” Kenney said. “Government always gets this reputation for being 20-plus years behind the private sector, but in a weird way, I feel like municipal governments have an opportunity to be at the cutting edge, because we have access to all this interesting data, we have the really challenging problems, and we have a strong locus of control in that we can have a direct effect on quality of life if we’re leveraging this stuff properly.”
Government’s onus to fix society’s problems with one hand tied behind its back is what makes a tool like AI so enticing to today’s leaders. Predicting early which kids will drop out of school, spotting neighborhood crises before they explode and quelling the state’s opioid epidemic are the reasons Boston is now planning its first AI projects, said the city’s CIO, Jascha Franklin-Hodge.
“One of the big challenges in the human services space is that, inevitably, you don’t have enough resources to reach every possible person that you might want to reach,” he said. “How do you identify the places where you’re going to have the biggest impact? If we can only help 10 percent of the population that we’re trying to serve, what’s the 10 percent where it’s going to make the most difference to get some kind of intervention?”
AI can augment any endeavor, but it thrives where human expertise is scarce. Computer science and ornithology teams at Oregon State University and Cornell University are using AI to make predictions about continental bird migration patterns, a field with relatively few experts. Originally funded by a National Science Foundation grant in 2008, the team’s algorithms and processing capabilities are approaching the point where the system could be used for decision-making, said Thomas Dietterich, Oregon State University distinguished professor of computer science and director of Intelligent Systems.
The system is powered by data gathered by bird watchers around the nation and submitted through Cornell’s eBird portal. The AI accounts for factors like distance traveled per day and rest time between legs of the journey, based on the crowdsourced sightings and historical data, and estimates when and where the birds will be. Outside of bird watching, reliable migration predictions can be used to plan wind turbine installation, shut off turbines to protect birds or inform the military where it might avoid flying at night. Similar partnerships already exist between bird watchers and the military in both Israel and the Netherlands, Dietterich said.
“We can answer our scientific questions without ‘private’ data for an individual bird, and the same techniques might be useful in other settings, like traffic, where you have anonymous count data but you would like to make inferences without violating anyone’s privacy,” he said. “Our ultimate goal is to get on the Weather Channel and just have our predictions out there as a [free] Web service that all these different potential customers could get access to.”
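The quantities the team reasons about, daily travel distance and rest time between legs, lend themselves to a back-of-envelope timing estimate. The numbers below are purely illustrative, not eBird-derived parameters:

```python
# Back-of-envelope sketch of migration timing from the quantities the text
# mentions: daily travel distance and rest days between legs of the journey.
def arrival_day(total_km, km_per_travel_day, rest_days_per_leg, leg_km):
    """Estimate days needed to cover total_km, resting after each completed leg."""
    travel_days = total_km / km_per_travel_day
    legs = total_km // leg_km
    return travel_days + legs * rest_days_per_leg

# e.g. 3,000 km at 300 km per travel day, resting 2 days after each 600 km leg:
# 10 travel days plus 5 legs * 2 rest days = 20 days
print(arrival_day(3000, 300, 2, 600))
```

The real system layers statistical inference over millions of sightings rather than fixed constants, but the arithmetic shows why per-day travel and rest assumptions translate directly into "when and where" predictions.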
In post-Hurricane-Katrina New Orleans, urban blight remains an unwieldy problem, but a new AI model launched under the city’s NOLAlytics program reduced a backlog of 1,800 cases to zero within 90 days of its February 2015 launch. Through a pro bono partnership with Enigma.io, the city found a solution to a tedious problem for which there weren’t enough qualified workers or hours in the day to evaluate each case.
New Orleans and Cincinnati are both using AI models to help reduce urban blight. Photo by Shutterstock.com.
The Code Enforcement Abatement Tool draws on a dozen criteria to generate a numeric score that tells the city whether a structure should be demolished or sold. The model is more rigorous and transparent than the previous process, in which one person made decisions that weren’t necessarily quantifiable, said Oliver Wise, director of the New Orleans Office of Performance and Accountability.
“The algorithm is modeled to mimic what a human would do if they had lots and lots of time, because we don’t have lots of time,” said Wise. “There’s all sorts of areas where government decision-makers have to make hard decisions, but kind of repetitive decisions, and I think projects like this can be applied in a whole host of areas.”
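In miniature, a criteria-driven abatement score might look like the following sketch. The criteria, weights and cutoff are hypothetical; the actual tool’s dozen criteria are not enumerated in this article.

```python
# Hypothetical sketch of a criteria-based abatement score. The real Code
# Enforcement Abatement Tool's criteria and thresholds are not public here;
# these weights and the cutoff are invented for illustration.
CRITERIA_WEIGHTS = {
    "structural_damage": 3,      # inspector-rated severity, 0-5
    "fire_damage": 2,            # inspector-rated severity, 0-5
    "tax_delinquent_years": 1,
    "open_code_cases": 1,
}
DEMOLISH_THRESHOLD = 15  # illustrative cutoff between demolish and sell

def abatement_score(case):
    """Weighted sum of the case's criteria; missing criteria count as zero."""
    return sum(w * case.get(k, 0) for k, w in CRITERIA_WEIGHTS.items())

def recommend(case):
    """Return 'demolish' or 'sell' based on the numeric score."""
    return "demolish" if abatement_score(case) >= DEMOLISH_THRESHOLD else "sell"

case = {"structural_damage": 5, "fire_damage": 1,
        "tax_delinquent_years": 3, "open_code_cases": 2}
print(abatement_score(case), recommend(case))  # 22 demolish
```

The point Wise makes holds even in this toy version: because the score is an explicit formula, every recommendation can be traced back to the criteria that produced it.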
Starting in 2011, a series of resource allocation models developed by a team at the University of Southern California began ensuring that security teams around the nation are making the best use of their limited resources. The U.S. Coast Guard, the Los Angeles Sheriff’s Department, the Federal Air Marshal Service, the Los Angeles Airport Police and the Transportation Security Administration use models developed by the team of Milind Tambe, computer science and engineering professor at the university.
The models schedule random security patrols in ports, airports and cities to avoid being predictable, while still prioritizing the most important locations. This kind of scheduling is already a complex task, Tambe said, but it’s made more difficult without AI because people are terrible at being random. The type of high-impact crime that these patrols are designed to prevent, like terrorism, makes it difficult to estimate the impact of using these models, but the time saved for human schedulers and the improved resource allocation prove the tool’s worth, Tambe said. And the models are now being applied to similar disciplines, like anti-poaching efforts.
Tambe’s team is planning to begin a five-week pilot in January in cooperation with the university’s School of Social Work to understand which homeless people would be most effectively recruited for HIV education campaigns. Using an AI equipped with knowledge of the city’s homeless social networks is more effective than using heuristics like choosing the most popular individuals or those most centrally located in the network, said Tambe. Each week of the pilot, the team will recruit new peer leaders to provide information that will be fed back into the model to improve its accuracy and the campaign’s efficacy.
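The advantage over a "pick the most popular people" heuristic can be seen in a greedy max-coverage sketch. The toy network below is invented; Tambe’s models reason over real social-network data and uncertainty about it.

```python
# Hedged sketch of network-aware peer-leader selection: greedily pick leaders
# who reach the most not-yet-covered people. The graph is a toy example,
# not real social-network data; names are invented.
NETWORK = {
    "ana": {"bo", "cy", "di"},
    "bo": {"ana", "cy"},
    "cy": {"ana", "bo"},
    "di": {"ana"},
    "eli": {"fay", "gus"},
    "fay": {"eli"},
    "gus": {"eli"},
}

def greedy_leaders(graph, k):
    """Pick k leaders maximizing the number of distinct people reached."""
    covered, leaders = set(), []
    for _ in range(k):
        best = max(graph, key=lambda p: len((graph[p] | {p}) - covered))
        leaders.append(best)
        covered |= graph[best] | {best}
    return leaders, covered

leaders, covered = greedy_leaders(NETWORK, 2)
print(leaders, len(covered))  # ['ana', 'eli'] 7
```

A pure popularity heuristic would pick ana and then one of her equally popular neighbors, reaching only the first cluster; the greedy coverage rule picks eli second and reaches everyone, which is the kind of gain Tambe describes over simple heuristics.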
“We are very interested in things that will have a positive impact socially,” Tambe said. “Whether it’s protection of environment, forest, fish and wildlife, or problems such as water levels rising and climate refugees, to problems related to health, I think AI is going to really have a significant positive impact and allow us to assist humanity in solving some of the major challenges we face.”   