How this AI-human partnership takes cybersecurity to a new level

A program designed by MIT to battle hackers is an example of effective collaboration between artificial intelligence and humans.

South Korean professional Go player Lee Sedol places the first stone in a match against Google's artificial intelligence program, AlphaGo, which went on to win the series 4-1. MIT scientists are now working on AI programs that could detect cyberattacks.

AP Photo/Lee Jin-man

April 20, 2016

In the ongoing battle against cyber attacks, a man-machine collaboration could offer a new path to security.

To keep up with cyber threats, the cybersecurity industry has turned to unsupervised artificial intelligence systems that operate independently of human analysts.

But the Computer Science and Artificial Intelligence Laboratory (CSAIL) at the Massachusetts Institute of Technology in Cambridge, Mass., in partnership with the machine-learning startup PatternEx, is offering a fresh approach. Their new program, AI2, draws on what humans and machines each do best: It allows human analysts to build upon the large-scale pattern recognition and learning capabilities of artificial intelligence.


"The industry standard right now is unsupervised machine learning," CSAIL research scientist Kalyan Veeramachaneni, who helped develop the program, says in a phone interview with The Christian Science Monitor.

Cybersecurity firms send out AI programs that autonomously identify patterns in data and flag any outliers as possible attacks.

"There's a recognition that the volume of data that has to be analyzed is going up exponentially," says Alan Brill, senior managing director of cybersecurity firm Kroll, in a phone interview with the Monitor. "But the human resources that are both trained and available to do this are not going up at that rate. And that starts to leave an analytic gap. That has to be handled by some form of machine intelligence that can look at the log files and transactions and the data and make some intelligent decisions."

The problem: "Every day the data changes, you change something on the website and the behaviors may change, and ultimately outliers are very subjective," Dr. Veeramachaneni says. 

With AI2, the team of researchers asked this question: "Is it possible to have an AI flag a possible attack and then have an analyst report whether or not it was a threat?"


The result is a feedback loop that blends unsupervised detection with supervised learning from analysts, and the new, partially supervised program has produced excellent results.

AI2 flags anomalies in the data like any other unsupervised program, but then reports a small sample of its findings to analysts. Analysts look for false positives, data that was incorrectly flagged as a threat, and report them back to the AI program. The feedback is plugged into the learning equation and refines AI2 for its next search.

In other words, as the AI does the laborious work of searching through millions of lines of data, the humans confirm and refine its findings, labeling the type of attack found (brute-force, trojan, etc.) and identifying new combinations.
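To make the loop concrete, here is a minimal sketch of the general pattern described above: an unsupervised detector surfaces the most anomalous events, an analyst labels them, and a supervised model is trained on those labels to sharpen the next pass. This is an illustration only, not the published AI2 system; the scikit-learn models, the synthetic data, and the `ask_analyst` placeholder are all assumptions made for the example.

```python
# Illustrative human-in-the-loop sketch, NOT the actual AI2 implementation.
# Assumes scikit-learn; `ask_analyst` stands in for the human labeling step.
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 20))      # stand-in for one day of event features

def ask_analyst(events):
    """Placeholder: analysts mark each flagged event as a real attack (1)
    or a false positive (0). Random labels here; humans in reality."""
    return rng.integers(0, 2, size=len(events))

# 1. Unsupervised pass: score every event, keep the most anomalous ones.
detector = IsolationForest(random_state=0).fit(X)
scores = detector.score_samples(X)     # lower score = more anomalous
top_k = np.argsort(scores)[:200]       # "Day 1: present roughly 200 anomalies"

# 2. Human pass: analysts confirm or reject the flagged events.
labels = ask_analyst(X[top_k])

# 3. Supervised pass: the labels are fed back into a model that learns what
#    confirmed attacks look like and refines the next search.
supervised = RandomForestClassifier(random_state=0).fit(X[top_k], labels)
```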

Veeramachaneni's study, released Monday, shows that the MIT program now has an 85 percent detection rate and a false positive rate as low as 5 percent. The false positive rate is the impressive number, according to cybersecurity experts.

"If you use unsupervised learning alone, to get 85 percent detection it will end up with 20-25 percent false positive rate," Veeramachaneni says. "There are hundreds of thousands of events, if you’re showing analysts 25 percent, that is huge. People can say they have a program with 99 percent detection rate, but you have to ask how many false positives."

Why haven't other AI cybersecurity programs learned from analyst feedback? Incorporating such feedback is a common AI practice, but it has been extremely hard to adapt for cybersecurity.

For example, researchers have used feedback from people to help an AI program identify objects in images. A group of willing participants could look through millions of images and flag the ones that have lamps in them. That data set would then be used to help teach an AI program to identify lamps.

While simple objects are easily identifiable, it's harder to pick out a cyberattack in lines of data or code. And the experts who can are already swamped with millions of lines to look through.

The AI2 researchers developed a system that makes teaching the program relatively easy. AI2 presents only a small portion of its findings, and that portion is continually refined. On Day 1 it might present an analyst with 200 anomalies, but later it may present only 100.
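Continuing the illustrative sketch from earlier (again, an assumption about how such a ranking could work, not the published AI2 method), the next day's queue can be ranked by combining the unsupervised anomaly score with the supervised model's confidence, so fewer and better candidates reach the analyst:

```python
# Day 2: rank new events with both the anomaly detector and the model
# trained on yesterday's analyst labels, then surface a shorter list.
X_day2 = rng.normal(size=(10_000, 20))              # stand-in for the next day's events
unsup = -detector.score_samples(X_day2)             # higher = more anomalous
unsup = (unsup - unsup.min()) / (unsup.max() - unsup.min())
sup = supervised.predict_proba(X_day2)[:, 1]        # learned likelihood of a real attack
combined = 0.5 * unsup + 0.5 * sup                  # equal weighting is an arbitrary choice
show_to_analyst = np.argsort(combined)[::-1][:100]  # present 100 events instead of 200
```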

Successfully finding a way to incorporate such feedback opened up a more reliable and adaptable system for the fight against cyberattacks, and it could be a major boon for businesses.

"The number and sophistication of cyber attacks is a disruptor for traditional industries," says Brill at Kroll. What AI
does "is the kind of thing we need to keep up with the hackers," he adds.

For now, PatternEx is bringing AI2 to Fortune 500 companies, but the hope is to make the program available to businesses of all sizes.

"As we build more and more of these around different companies, the model are transferable," Veeramachaneni says. "For a small company, that doesn’t have a budget for a security team, we could transfer the models from other companies for them."