Anomaly: something that deviates from what is standard, normal, or expected.
In an enterprise’s Security Operations Center (SOC), considerable resources are expended on systems and processes that can identify anomalies, whether in machine (entity) or human behavior. Why? Because an anomaly may be an indicator of compromise that needs investigating.
To ensure investigations are focused on the genuinely suspicious issues and avoid a bombardment of false positives, it’s vital to have a nuanced understanding of what “normal” looks like for each individual user and entity. And what qualifies as normal may change from time to time, even for the same user.
For example, an employee working in customer service may typically access Help Desk files or customer account data – but only within a few minutes of a customer’s phone call. To do so on their lunchbreak, or after hours, could be considered suspicious and worthy of follow-up.
Or it may not.
Such behavior could also be entirely regular for a particular employee and, after an initial investigation, found to be for a valid, harmless reason. Or, the behavior could be new and different for a particular employee, but very consistent with the behavior of peers reaching their level of tenure. In either case, an alert generated off such an event might nearly always prove to be a false positive.
So here lies the challenge: to develop a detailed understanding of what is normal for any given user or entity based not just on simple rules, but an evolving model of individual context.
A large bank, for example, can operate with thousands of applications and tens of thousands of employees. While each of those individual employees behaves in their own idiosyncratic way, there are typically patterns that can be identified around the team they operate in, or the employment stage they’re at. Data science techniques, such as machine learning, can be applied to build profiles for such people in different departments, or responsible for certain processes, to understand their “normal” behavior in different circumstances.
Effective anomaly detection entails combining the expected behavior of teams of people or types of machines with the past behavior and current context of an individual person or entity. Doing this in real time, based on potentially hundreds of millions of events per day and petabytes of historical data for reference, is non-trivial. But it is doable.
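To make the idea concrete, here is a minimal sketch of blending an individual’s baseline with a peer-group baseline to score how unusual an event is. All names, values, and the weighting scheme are illustrative assumptions, not a description of any particular product’s implementation; a production system would use far richer features and models.

```python
from statistics import mean, stdev

def anomaly_score(event_value, user_history, peer_history, user_weight=0.7):
    """Score how unusual an event is, blending the user's own baseline
    with their peer group's baseline (names and weights are illustrative)."""
    score = 0.0
    for history, weight in ((user_history, user_weight),
                            (peer_history, 1 - user_weight)):
        if len(history) >= 2:
            mu, sigma = mean(history), stdev(history)
            # z-score against this baseline; guard against zero variance
            score += weight * abs(event_value - mu) / (sigma or 1.0)
    return score

# Example: after-hours file accesses per day
user = [2, 3, 1, 2, 3, 2]      # this employee's recent daily counts
peers = [1, 2, 2, 3, 1, 2, 4]  # counts for peers in the same role
print(anomaly_score(25, user, peers))  # a sudden spike scores high
print(anomaly_score(2, user, peers))   # a typical day scores near zero
```

A spike of 25 after-hours accesses scores far above the usual two or three, while a typical day barely registers. The weighting lets a new employee lean on peer behavior until enough individual history accumulates.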
At LigaData, we have worked with large banks to solve these threat detection and response problems. By applying the most advanced continuous decisioning and machine learning technologies and techniques, our products help Compliance, Regulatory and Security teams focus on identifying high-probability, high-risk issues without drowning in false positives.
If you’d like to talk to us about how LigaData can help safeguard your organization, please contact us at email@example.com.
And if you’d like to learn more about how our use of extended lambda architecture enables us to perform complex continuous decisioning at great speed, please download our white paper.
This article was first published by LigaData on 15 February 2017, and reviewed for relevancy in 2018. We think it still has validity.