Predictive Correlation — The Future of Cyber Security?

What is Predictive Correlation?

Research funded by the National Science Foundation has led to the development of a proprietary inter-domain correlation algorithm that is mathematically similar to Google’s PageRank algorithm. Event scores are obtained autonomously from a global network of honeypot sensors monitored by the MetaFlows Security System (MSS). The honeypots are virtual machines that masquerade as victims: they open dangerous ports and applications and/or browse dangerous websites. As the honeypots are repeatedly infected, the MSS records both successful and unsuccessful hacker URLs, files, bad ports, and bad services. When a honeypot generates a security event that turns out to be a false positive, the corresponding alert is ranked negatively, providing insight into events that should be routinely ignored or turned off. Security events that turn out to be true positives are ranked positively, improving their visibility. This information is then propagated in real time to each subscriber’s sensors to augment traditional correlation techniques. This additional inter-domain correlation is important because it adds operational awareness based on real-time intelligence.
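To make the idea concrete, here is a minimal, hypothetical sketch of how honeypot outcomes could be turned into signed event scores. The signature names and the simple +1/-1 scoring scheme are illustrative assumptions, not the actual MSS implementation.

```python
# Hypothetical sketch: turning honeypot observations into signed event scores.
# The signature names and the +1/-1 scoring scheme are illustrative assumptions.

from collections import defaultdict

def score_honeypot_events(observations):
    """observations: iterable of (signature_id, was_true_positive) pairs
    recorded when a honeypot is probed or infected."""
    scores = defaultdict(float)
    for signature_id, was_true_positive in observations:
        # True positives raise an event's rank; false positives lower it,
        # flagging alerts that analysts can safely de-prioritize.
        scores[signature_id] += 1.0 if was_true_positive else -1.0
    return dict(scores)

if __name__ == "__main__":
    observed = [
        ("EXPLOIT shellcode download", True),
        ("POLICY suspicious user agent", False),
        ("POLICY suspicious user agent", False),
    ]
    print(score_honeypot_events(observed))
```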

How does it work?

As shown in the figure below, the honeypots work behind the scenes, continuously mining global relevance data and flow intelligence (IP reputation) for threats that penetrate differing degrees of cyber-defenses on different types of systems. Annotated data from all network sensors (honeypots or not) are then compared, and events are correlated with an algorithm similar to Google’s PageRank: X = b·s + a·W·X.
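For readers who prefer code to notation, a recurrence of this form can be solved with a simple fixed-point (power) iteration, just as PageRank is. The sketch below is a minimal illustration under assumed inputs: W is taken to be a normalized sensor-similarity matrix, s a vector of locally observed event scores, and a and b mixing weights. None of these values come from the actual MSS.

```python
# Minimal power-iteration sketch of the ranking recurrence X = b*s + a*W*X,
# in the spirit of PageRank. W, s, a, and b are illustrative assumptions.

import numpy as np

def rank(W, s, a=0.85, b=0.15, tol=1e-9, max_iter=100):
    """Iterate x <- b*s + a*W.dot(x) until the change is negligible."""
    x = s.copy()
    for _ in range(max_iter):
        x_next = b * s + a * W.dot(x)
        if np.linalg.norm(x_next - x, 1) < tol:
            return x_next
        x = x_next
    return x

if __name__ == "__main__":
    # Toy example: 3 sensors with a normalized similarity matrix W.
    W = np.array([[0.0, 0.5, 0.5],
                  [0.5, 0.0, 0.5],
                  [0.5, 0.5, 0.0]])
    s = np.array([1.0, -1.0, 0.0])   # locally observed event scores
    print(rank(W, s))
```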

[Figure 1: Predictive Global Correlation (diagram of the MetaFlows event correlation system)]

This process is designed to provide subscribers with intelligence data that takes into account the similarities and differences between the sources of the data. Due to space limitations we cannot explain the math and why it makes sense; however, our system builds on the work described in “Highly Predictive Blacklisting” by Jian Zhang, Phillip Porras, and Johannes Ullrich (SRI International and the SANS Institute), published at USENIX Security, August 2008 (we highly recommend reading this paper).

So What?

Because of this ranking, once a piece of intelligence reaches our system it is not distributed equally to all customers. Instead, it is mathematically weighted and routed to where it is most relevant, just as the first few pages of a Google search yield the most relevant results for a particular query.

In addition to real-time intelligence on true-positive security events (positive ranking), our system also identifies irrelevant security alerts, demoting them and reducing false-positive clutter. In other words, the system can propagate known false positives and known true positives among sensors using a mathematical model that maximizes prediction accuracy.

[Figure 2: Prediction power of the MetaFlows ranking algorithm]

The graph above quantifies the prediction power of the ranking algorithm. The experiment was carried out on Snort event relevance data gathered between February 7th, 2010 and February 22nd, 2010. At the start of each day we performed the ranking operation over the previous day’s Snort event data and compared the predicted ranking values with the actual events gathered during that day from the sensors and honeypots. The simple prediction (blue line) assumes that, for each sensor, the previous day’s event ranking simply carries over unchanged, without running the algorithm (this is essentially what is done today).

The Y axis is the hit ratio, defined as the number of times the prediction matches the outcome in sign (positive or negative), divided by the number of non-zero rankings predicted; a short code sketch of this bookkeeping follows the list below.

  • We increment the hit counter if both the prediction and the outcome are positive.
  • We increment the hit counter if both the prediction and the outcome are negative.
  • We decrement the hit counter if the prediction and the outcome have opposite signs.
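Here is the sketch referenced above. The input format (dictionaries mapping event signatures to signed rankings) is an assumption made for illustration, not the actual MSS data model.

```python
# Illustrative sketch of the hit-ratio bookkeeping described above.

def hit_ratio(predicted, actual):
    """predicted/actual: {event_signature: signed ranking}."""
    hits = 0
    nonzero_predictions = 0
    for event_id, p in predicted.items():
        if p == 0:
            continue
        nonzero_predictions += 1
        a = actual.get(event_id, 0)
        if p > 0 and a > 0:
            hits += 1          # both positive: correct prediction
        elif p < 0 and a < 0:
            hits += 1          # both negative: correct prediction
        elif p * a < 0:
            hits -= 1          # opposite signs: penalized
    return hits / nonzero_predictions if nonzero_predictions else 0.0

if __name__ == "__main__":
    predicted = {"sig-1": 0.8, "sig-2": -0.3, "sig-3": 0.2}
    actual    = {"sig-1": 1.0, "sig-2": -1.0, "sig-3": -0.5}
    print(hit_ratio(predicted, actual))  # (1 + 1 - 1) / 3 ≈ 0.33
```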

The figure shows that the ranking prediction (orange line) consistently outperforms the simple prediction by 141% to 350% (depending on the day). This might not seem too impressive on the surface, but if you dig a little deeper this is what it means:

  • Assuming 5 minutes of human analysis time per incident, a system with no ranking yields 1 actionable item for every 20-30 incident investigations (roughly 0.4 actionable items per analyst hour).
  • A system with predictive ranking lets you find 1 actionable item for every 6-7 incident investigations (roughly 2 actionable items per analyst hour).

You can do the math in terms of cost savings: it’s huge! Most of the cost of network security systems is not the appliance or the software, but rather wasted analyst time!
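Here is a quick back-of-the-envelope check, using only the assumptions stated above (5 minutes of analysis per incident); the specific investigation rates plugged in are illustrative midpoints of the ranges quoted in the list.

```python
# Back-of-the-envelope check of the analyst-time claim above.
# The 5-minute figure comes from the article; the midpoint rates are assumptions.

MINUTES_PER_INCIDENT = 5
INVESTIGATIONS_PER_HOUR = 60 / MINUTES_PER_INCIDENT      # 12 investigations/hour

def actionable_per_hour(investigations_per_actionable):
    return INVESTIGATIONS_PER_HOUR / investigations_per_actionable

no_ranking = actionable_per_hour(25)      # ~1 in 20-30 investigations
with_ranking = actionable_per_hour(6.5)   # ~1 in 6-7 investigations

print(f"without ranking: {no_ranking:.1f} actionable items per analyst hour")
print(f"with ranking:    {with_ranking:.1f} actionable items per analyst hour")
print(f"analyst time per actionable item cut by {(1 - no_ranking / with_ranking) * 100:.0f}%")
```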

You Should Not Just Take Our Word for It!

The cyber-security arena is packed with technologies that claim they have the best solutions. That is why we encourage users to take the time to evaluate our predictive correlation and run it side-by-side with existing solutions. The outcome is always surprisingly good.
