Low Hanging Fruit in Cybersecurity II

If cybersecurity exists to stop bad things happening in computing systems, then it seems to me that many approaches and techniques rest on several implicit assumptions that might not be completely helpful. These are:

  • The distinction between “good” (or “allowable”) and “bad” is a binary distinction;
  • The decision about this distinction has to be made monolithically in a single step;
  • The distribution of likely things that could happen is uniform (flat).

Even to write them explicitly shows that they can’t quite be right, but nevertheless I suspect they exist, unexamined, in the design of many security systems.

What happens if we remove these assumptions?

If the distinction between “good” and “bad” is not discrete, then our systems can instead assign some kind of continuous risk or suspicion to actions. This creates an interesting new possibility: the decision about what to do about an action can now be decoupled from how the action is scored. That decoupling isn’t even possible if the only distinction we recognize is binary.

From a purely technical point of view, this means that many different kinds of risk-measuring algorithms can be developed and used orthogonally to decisions about what their outputs mean. Critical boundaries can be determined after the set of risks has been calculated, and may even be derived from the distribution of those risks. For example, bad things are (almost always) rare, so a list of actions ordered by risk will normally have a bulge of “normal” actions and then a small number of anomalous ones. The boundary could be placed at the edge of the bulge.
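As a concrete illustration, here is a minimal sketch in Python. The scores and the “largest gap” heuristic are my own illustrative assumptions, not a specific published algorithm; the point is only that the boundary is derived from the distribution of scores after they have all been computed.

```python
# A minimal sketch of decoupling risk scoring from the decision boundary.
# The "largest gap" heuristic is one illustrative way to find the edge of
# the bulge of normal actions; any distribution-derived rule would do.

def largest_gap_threshold(scores, tail_fraction=0.25):
    """Place the boundary at the biggest gap within the riskiest
    tail_fraction of scores, assuming anomalies are rare and separated."""
    ordered = sorted(scores)                       # ascending risk
    tail_start = int(len(ordered) * (1 - tail_fraction))
    tail = ordered[tail_start:]
    if len(tail) < 2:
        return ordered[-1]                         # nothing to separate
    # Find the largest jump between consecutive scores in the tail.
    gaps = [(tail[i + 1] - tail[i], i) for i in range(len(tail) - 1)]
    _, idx = max(gaps)
    return (tail[idx] + tail[idx + 1]) / 2         # midpoint of the gap

# Usage: score everything first, then derive the boundary afterwards.
scores = [0.1, 0.12, 0.15, 0.18, 0.19, 0.2, 0.21, 0.22, 0.85, 0.9]
threshold = largest_gap_threshold(scores)
flagged = [s for s in scores if s > threshold]     # [0.85, 0.9]
```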

Second, what if the decision about whether to allow an action doesn’t have to be made all at once? Then systems can have defence in depth. The first, outer, layer can assess the risk of a new action and decide whether or not to allow it, but it can be forgiving of potentially risky actions if there are further layers of categorization and defence to follow. What it can do is disallow the clearly and definitively bad things, reducing the number of potentially bad things that have to be considered at later stages.

From a technical point of view, this means that weaker but cheaper algorithms can be used on the front lines of defence, with more effective but more expensive algorithms available for later stages (where they work with less data, and so do not cost as much overall, despite being more expensive per instance).
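A sketch of what such a pipeline might look like; cheap_score and expensive_score are hypothetical stand-ins for real detectors, and the thresholds are illustrative:

```python
# A sketch of layered filtering: a cheap front-line scorer handles the
# clear cases, and only the ambiguous middle reaches the expensive stage.

def cheap_score(action):
    # Placeholder: e.g. a blocklist lookup or a handful of fast rules.
    return action.get("known_bad", 0.0)

def expensive_score(action):
    # Placeholder: e.g. a full behavioural model or deep inspection.
    return action.get("model_risk", 0.0)

def layered_decision(action, hard_block=0.9, escalate=0.3):
    risk = cheap_score(action)
    if risk >= hard_block:
        return "disallow"        # definitively bad: stop at the cheap layer
    if risk < escalate:
        return "allow"           # clearly routine: never pay for stage two
    # Only ambiguous actions reach here, so the expensive algorithm's
    # per-instance cost is high but its overall cost stays low.
    return "disallow" if expensive_score(action) >= 0.5 else "allow"

print(layered_decision({"known_bad": 0.95}))                    # disallow
print(layered_decision({"known_bad": 0.5, "model_risk": 0.2}))  # allow
```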

Third, what if our defence took into account that the landscape of expected actions is not uniform, so that low-probability events were automatically treated as more suspicious? For example, spam filtering does lots of clever things, but it doesn’t build a model of the sources of my email, and flag emails from countries that I’ve never, ever received email from as inherently more likely to be spam. (Yes, I know that sender addresses can be spoofed.)
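To make the email example concrete, here is one possible sketch: a frequency model over sender countries, where rarity translates into surprisal. The history, the smoothing, and the vocabulary size are all assumptions for illustration, not a description of how any real spam filter works.

```python
# A sketch of a source-country model for email. Rare origins score higher;
# Laplace smoothing keeps never-seen countries finite rather than infinite.

import math
from collections import Counter

history = Counter()          # country code -> number of past emails

def record(country):
    history[country] += 1

def origin_suspicion(country, vocabulary_size=250):
    """Surprisal (in bits) of seeing this country, given past email."""
    total = sum(history.values()) + vocabulary_size   # add-one smoothing
    p = (history[country] + 1) / total
    return -math.log2(p)

for c in ["US", "US", "UK", "US", "DE", "UK"]:
    record(c)

print(origin_suspicion("US"))   # lower: a familiar source
print(origin_suspicion("KP"))   # higher: never seen before
```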

This idea has been used in behavioral profiling of computer activity, and it sort of works. But it needs to be combined with the ideas above, so that actions can be rated along a continuum: routine (allow); unusual but not that unusual (allow, but perhaps with a user question, or at least logged for occasional inspection); very unusual (the user must explicitly allow); bizarre (disallow). Windows has a weak version of this, which hasn’t been well accepted by users, but it flags only one thing (program start) and it doesn’t build a model of the typical behavior of each user.
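A sketch of such a continuum, assuming a per-user model in which an action’s suspicion is its percentile within that user’s own history; the tier cut-offs are arbitrary illustrations:

```python
# A sketch of a per-user model plus a tiered response. Suspicion is how
# far out in the user's own history an action's raw score falls.

import bisect

class UserModel:
    def __init__(self):
        self.scores = []          # sorted raw scores of past actions

    def observe(self, raw_score):
        bisect.insort(self.scores, raw_score)

    def suspicion(self, raw_score):
        if not self.scores:
            return 1.0            # no history yet: maximally cautious
        rank = bisect.bisect_left(self.scores, raw_score)
        return rank / len(self.scores)   # empirical percentile in [0, 1]

def tiered_response(suspicion):
    if suspicion < 0.90:
        return "allow"                                    # routine
    if suspicion < 0.99:
        return "allow, log for occasional inspection"     # unusual
    if suspicion < 0.999:
        return "require explicit user approval"           # very unusual
    return "disallow"                                     # bizarre
```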

For example, the set of IP addresses with which my computer interacts is quite large, and hard to represent by some kind of convex structure, so intrusion detection doesn’t work very well if it depends on wrapping/categorising the IP addresses that are OK and blocking traffic from those that are not. And usually the set of OK IP addresses is not derived from those I actually interact with, but encoded in some set of rules applied to many computers. But suppose instead I built a model of the IP addresses I interact with, allowing older ones to go stale and disappear, and then looked at new IP addresses, allowing them if they resembled (tricky) those I already interact with, and asking me about the others; this might work better than current approaches. An IP address is a hierarchical structure, with a probable country, then the top octet, and so on, so I can discriminate quite finely about what a new address might mean. Even a web server that is theoretically visible to every other IP address could still benefit from handling unlikely source IP addresses differently.
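A sketch of that per-host model, using dotted-quad prefixes as a crude stand-in for the country/CIDR hierarchy described above; the class name and the staleness window are illustrative assumptions:

```python
# A sketch of a per-host IP model: addresses I talk to are remembered,
# stale ones fade, and a new address is judged by the longest prefix it
# shares with familiar ones.

import time

STALE_AFTER = 30 * 24 * 3600      # forget prefixes unseen for 30 days

class IPModel:
    def __init__(self):
        self.last_seen = {}        # prefix string -> last-seen timestamp

    def _prefixes(self, ip):
        octets = ip.split(".")
        return [".".join(octets[:n]) for n in range(1, 5)]

    def observe(self, ip, now=None):
        now = now or time.time()
        for p in self._prefixes(ip):
            self.last_seen[p] = now

    def familiarity(self, ip, now=None):
        """0.0 = entirely new, 1.0 = exact address seen recently."""
        now = now or time.time()
        depth = 0
        for p in self._prefixes(ip):
            seen = self.last_seen.get(p)
            if seen is None or now - seen > STALE_AFTER:
                break
            depth += 1
        return depth / 4.0

model = IPModel()
model.observe("192.168.1.10")
print(model.familiarity("192.168.1.10"))   # 1.0: known address
print(model.familiarity("192.168.1.99"))   # 0.75: familiar subnet, new host
print(model.familiarity("203.0.113.5"))    # 0.0: entirely new region
```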

OK, maybe this isn’t exactly low hanging fruit, but the ideas are straightforward and (IMHO) should be built into the design of more robust systems.
