MIT Develops Machine Learning AI To Detect Cyberattacks
Source: Jef Cozza
A new artificial intelligence platform developed by MIT and PatternEx can identify up to 85 percent of cyberattacks, according to a new research paper. Dubbed AI2, the platform is said to be significantly better at predicting cyberattacks than similar systems because it continuously incorporates new input provided by human experts.
“Today’s security systems usually fall into one of two categories: man or machine,” Adam Conner-Simon from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) wrote in a post on the MIT News site.
"So-called ‘analyst-driven solutions’ rely on rules created by human experts and therefore miss any attacks that don’t match the rules," he said. "Meanwhile, today’s machine-learning approaches rely on ‘anomaly detection,’ which tends to trigger false positives that both create distrust of the system and end up having to be investigated by humans, anyway.” The MIT and PatternEx platform attempts to merge those two approaches.
An Automated Analyst
AI2 predicts attacks by combing through data and detecting suspicious activity by clustering it into meaningful patterns using unsupervised machine learning, according to researchers at MIT. It then presents the activity to human analysts who confirm which events are actual attacks. AI2 then incorporates that feedback into its models for the next set of data.
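The paper does not spell out AI2's features or models at this level of detail, but the unsupervised "flag the unusual, then ask a human" step it describes can be sketched with a toy example. Everything below is a hypothetical stand-in: the event counts, the z-score scoring, and the function names are illustrative assumptions, not AI2's actual implementation.

```python
from statistics import mean, stdev

# Hypothetical per-entity event counts (e.g., login attempts per user per hour).
# In AI2, far richer features would be extracted from raw log data.
event_counts = [4, 5, 3, 6, 5, 4, 97, 5, 6, 4, 5, 88]

def outlier_scores(values):
    """Score each value by its distance from the mean, in standard deviations."""
    mu, sigma = mean(values), stdev(values)
    return [abs(v - mu) / sigma for v in values]

def top_suspects(values, k=2):
    """Return indices of the k most anomalous values, for analyst review."""
    scores = outlier_scores(values)
    return sorted(range(len(values)), key=lambda i: scores[i], reverse=True)[:k]

print(top_suspects(event_counts))  # → [6, 11], the two extreme counts
```

The key point the article makes is that this unsupervised step alone over-alerts; AI2 hands only the highest-scoring events to a human, rather than every statistical oddity.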
“You can think about the system as an automated analyst,” said CSAIL research scientist Kalyan Veeramachaneni, who developed AI2 with Ignacio Arnaldo, a chief data scientist at PatternEx and a former CSAIL postdoctoral associate. “It continuously generates new models that it can refine in as little as a few hours, meaning it can improve its detection rates significantly and rapidly.” Veeramachaneni presented a paper about the system at last week’s IEEE International Conference on Big Data Security in New York City.
Machine learning algorithms typically rely on the work of many individuals helping to “teach” them how to identify the relevant data. But the advanced technical nature of threat analysis makes it difficult for anyone who isn’t an expert in data security to contribute. With such experts in high demand and with little time to spare to pore over mountains of data, finding less labor-intensive ways to develop security algorithms has been crucial.
Combining Expert Analysis with Machine Learning
AI2 attempts to combine human input with machine learning through an iterative process. The platform uses multiple autonomous-learning approaches to identify potential attacks, then shows the most likely hits to information security analysts for further analysis. The analysts’ decisions are then fed back into the algorithm, allowing it to refine its decision-making process. Because it is constantly refining its criteria based on human input, the system is able to continually improve its detection methodology. As a result, false positives are kept to a minimum.
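As a rough illustration of that feedback loop, consider a toy version in which analyst verdicts on flagged events are used to adjust an alert threshold. The "analyst" here is a simulated oracle, and the threshold-nudging rule is an assumption made for the sketch; AI2's real supervised models are far more sophisticated.

```python
def analyst_labels(events, flagged):
    """Simulated analyst: confirms which flagged events are real attacks.
    (Hypothetical ground truth: values of 50 or more are attacks.)"""
    return {i: events[i] >= 50 for i in flagged}

def refine_threshold(labeled, events, current):
    """Move the alert threshold between the largest confirmed-benign value
    and the smallest confirmed attack, so past false positives stop firing."""
    benign = [events[i] for i, is_attack in labeled.items() if not is_attack]
    attacks = [events[i] for i, is_attack in labeled.items() if is_attack]
    if benign and attacks:
        return (max(benign) + min(attacks)) / 2
    return current

events = [4, 5, 3, 6, 5, 4, 97, 5, 6, 4, 5, 88, 41]
threshold = 10.0                                      # crude initial cutoff
flagged = [i for i, v in enumerate(events) if v > threshold]
labels = analyst_labels(events, flagged)              # human feedback step
threshold = refine_threshold(labels, events, threshold)
print(threshold)  # → 64.5: the dismissed event (41) no longer triggers alerts
```

After one round of feedback, the event the analyst dismissed falls below the new threshold, which mirrors the article's claim that incorporating human verdicts drives false positives down on the next pass over the data.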
“This paper brings together the strengths of analyst intuition and machine learning, and ultimately drives down both false positives and false negatives,” Nitesh Chawla, a computer science professor at the University of Notre Dame, said in MIT's blog post. “This research has the potential to become a line of defense against attacks such as fraud, service abuse and account takeover, which are major challenges faced by consumer-facing systems.”