Author: Kate Adam, E8 Security
The traditional way of thinking about cyber threats is that something either is or isn’t one. When people think “threat detection,” they assume a finding is either inherently malicious or a false positive.
We like to think of threats as very black-or-white things, right down to their anatomy, or what we like to call indicators of compromise (IOCs). The idea that files, links, specific processes, and IP addresses are what directly make up attacks is one we’re all used to and comfortable with. These IOCs are tangible pieces of information that provide certainty: we know exactly what we’re looking for and how to protect our networks in the future, often with a signature or policy. It’s then just a matter of setting up the right policies and controls, and making sure we’re subscribed to the latest threat feeds.
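To make the signature-style model concrete, here is a minimal sketch of IOC matching. The feed entries and event fields are hypothetical; the point is that known-bad indicators are exact values, so detection reduces to a set-membership test.

```python
# Hypothetical threat-feed entries: exact values, so matching is trivial.
KNOWN_BAD_IPS = {"203.0.113.7", "198.51.100.23"}
KNOWN_BAD_HASHES = {"44d88612fea8a8f36de82e1278abb02f"}  # example MD5

def match_ioc(event: dict) -> bool:
    """Return True if the event contains a known indicator of compromise."""
    return (event.get("dst_ip") in KNOWN_BAD_IPS
            or event.get("file_md5") in KNOWN_BAD_HASHES)

# Example events (made up for illustration)
events = [
    {"dst_ip": "192.0.2.10", "file_md5": "0" * 32},
    {"dst_ip": "203.0.113.7", "file_md5": "1" * 32},
]
hits = [e for e in events if match_ioc(e)]
```

The simplicity is the appeal, and also the limitation: anything not already on the list passes through untouched.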
However, what happens before something is identified as a threat? What happens when a threat isn’t one that a matching function can detect? Maybe it’s someone who looks like or is a legitimate insider with legitimate access — what do you do when you don’t know what you’re looking for?
If we want to improve our effectiveness in protecting our networks from all manner of threats, known and unknown, we need to adjust our concept of a threat to include the shades of gray: anomalous and suspicious activity whose underlying causes are not yet known to security personnel or to the world at large.
Netflix & chill, and think of analogies for cybersecurity
I’ve recently been binge-watching older seasons of the show “Bones.” It’s a crime-solving show that’s been around since before the whole “next-generation” technology naming trend began, in which the FBI and forensic anthropologists solve murders by examining a victim’s bones. By the end of an episode, Dr. Temperance Brennan can tell Agent Seeley Booth exactly where and how the crime took place scene-by-scene, the characteristics of the attacker (height, weight, and any physiological maladies), and what the murder weapon was, just by looking at marks on the bones.
By studying the bone abnormalities, Dr. Brennan can detect the presence of malicious actions the victim underwent — though sometimes those actions are the result of an accident rather than a murder — and then use inductive reasoning to piece together and present the evidence needed to apprehend and convict the perpetrator, or exonerate the wrongfully accused.
This is analogous to how we as an industry should be defining IOCs and threats to the business. The bones are the digital assets that make up our networks, and the abnormalities are out-of-the-ordinary network, endpoint, and user behaviors that might be the result of a malicious actor, or might be legitimate personnel doing something risky but well-intentioned. Put another way, instead of trying to find the needle — because it may not even be a needle — we need to start with what’s odd about the haystack.
A different approach to threats and threat hunting
If we can’t identify threats using aspects of their makeup, we can shift our hunting and investigation strategy to focus on the effects that threats have on the resources around them. The point is that, until the causes behind them are identified, these potential behavioral indicators should be considered threats. At a minimum, if you find after investigation that the behaviors are not related to an external attack or malicious insider, they represent things your security team should know about — knowing how groups and individuals within your organization are behaving can inform decisions that reduce your overall risk.
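One common way to operationalize this effects-focused approach is simple baselining: compare each user’s or asset’s current activity against its own history and flag sharp deviations for investigation. This is a minimal sketch with made-up numbers, not a production detector; the data, threshold, and metric are all assumptions.

```python
from statistics import mean, stdev

def anomalous(baseline: list[float], today: float, threshold: float = 3.0) -> bool:
    """Flag today's activity count if it sits more than `threshold`
    standard deviations above this entity's own historical baseline."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return today != mu
    return (today - mu) / sigma > threshold

# Hypothetical daily file-access counts for one user over two weeks
history = [12, 9, 14, 11, 10, 13, 12, 9, 11, 10, 12, 14, 13, 11]

anomalous(history, 90)   # a sudden spike: worth investigating
anomalous(history, 12)   # within the user's normal range
```

Note that a flagged spike isn’t a verdict; it’s the starting point for the investigation that determines whether the cause is an attacker, a malicious insider, or a well-intentioned employee doing something risky.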