Low Hanging Fruit in Cybersecurity III

Any attempt to decide whether a particular action is “bad” or “good” requires some model of what “good” actually means. The only basis for intelligent action in almost any setting is to have a plan for the expected, together with a mechanism for noticing the unexpected, to which some kind of meta-planning can be attached. This is, of course, a crucial part of how we function as humans; we don’t hang the way software often does, because when we encounter the unexpected, we do something about it. (Indeed, an argument along this line has been used by J.R. Lucas to argue that the human mind is not a Turing machine.)

But most cybersecurity applications do not try (much) to build a model of what “good” or “expected” or “normal” should be like. Granted, this can be difficult; but I can’t help but think that often it’s not as difficult as it looks at first. Partly this is because of the statistical distribution that I discussed in my last post — although, on the internet, lots of things could happen, most of them are extremely unlikely. It may be too draconian to disallow them, but it seems right to be suspicious of them.

Actually, three different kinds of models of what should happen are needed. These are:

  1. A model of what “normal” input should look like. For example, for an intrusion detection system, this might be IP addresses and port numbers; for a user-behavioral system, this might be executables and times of day.
  2. A model of what “normal” transformations look like. Inputs arriving at the system lead to consequent actions; there should be a model of how these downstream actions depend on the system inputs.
  3. A model of what “normal” rates of change look like. For example, I may go to a web site in a domain I’ve never visited before; but over the course of different time periods (minutes, hours, days) the rate at which I encounter brand new web sites exhibits characteristic patterns.

An exception to the first model shows that something new is happening in the “outside” world — it’s a signal of novelty. An exception to the second model shows that the system’s model of activity is not rich enough — it’s a signal of interestingness. An exception to the third model shows that the environment is changing.
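
To make the three kinds of model concrete, here is a minimal sketch in Python; the feature names, window size, and threshold are illustrative assumptions rather than anything prescribed above. The input model tracks which feature values (IP addresses, ports, executables) have been seen before, the transformation model tracks which downstream actions have followed which inputs, and the rate-of-change model watches how often genuinely new values turn up per time window.

```python
from collections import defaultdict, deque
import time

class InputModel:
    """Model 1: what 'normal' input looks like. Tracks the feature values
    (IP addresses, ports, executables, ...) seen so far; a brand-new value
    is a signal of novelty."""
    def __init__(self):
        self.seen = defaultdict(set)

    def check(self, features):
        novel = {k for k, v in features.items() if v not in self.seen[k]}
        for k, v in features.items():
            self.seen[k].add(v)
        return novel                      # empty set: the input looks familiar

class TransformationModel:
    """Model 2: what 'normal' input-to-action transformations look like.
    Records which downstream actions have followed which inputs; an unseen
    pairing is a signal of interestingness."""
    def __init__(self):
        self.pairs = set()

    def check(self, input_key, action):
        unexpected = (input_key, action) not in self.pairs
        self.pairs.add((input_key, action))
        return unexpected

class RateOfChangeModel:
    """Model 3: what 'normal' rates of change look like. Counts how many
    genuinely new values arrive per window; an unusual count suggests the
    environment itself is changing."""
    def __init__(self, window_seconds=3600, max_new_per_window=20):
        self.window = window_seconds      # both parameters are illustrative
        self.limit = max_new_per_window
        self.new_events = deque()

    def record_novelty(self, now=None):
        now = time.time() if now is None else now
        self.new_events.append(now)
        while self.new_events and self.new_events[0] < now - self.window:
            self.new_events.popleft()
        return len(self.new_events) > self.limit
```

In a real system the thresholds and windows would themselves be learned from observed traffic rather than set by hand, but the shape of the three models is the point here.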

Activity that does not fit any one of these models should not necessarily cause the action to be refused or an alarm to be sounded, but it does provide a hook to which a meta-level of analysis can be attached, using more sophisticated models whose extra cost is practical only because they are invoked so rarely.
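
As a sketch of what that hook might look like, continuing the toy models above (the event fields and the deep_analysis placeholder are my own illustrative assumptions): the cheap models run on every event, and only the rare exceptions get handed to the slower analysis.

```python
def deep_analysis(event, signals):
    # Placeholder for the heavier, rarely-invoked meta-level models;
    # here it just reports what the cheap models flagged.
    return {"event": event, "signals": signals}

def handle_event(event, input_model, transformation_model, rate_model):
    """Run the cheap baseline models on everything; only exceptions
    trigger the expensive meta-level analysis."""
    signals = []

    novel = input_model.check(event["features"])
    if novel:
        signals.append(("novelty", novel))
        if rate_model.record_novelty(event["time"]):
            signals.append(("environment changing", "unusual rate of new values"))

    if transformation_model.check(event["input_key"], event["action"]):
        signals.append(("interestingness", (event["input_key"], event["action"])))

    if not signals:
        return None                       # normal: let it through, no extra work
    # Don't refuse the action or sound an alarm yet; just attach the
    # meta-level analysis, affordable precisely because it runs so rarely.
    return deep_analysis(event, signals)
```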

Again, think of the human analogy. We spend a great deal of our time running on autopilot/habit. This saves cognitive effort on things that don’t need much of it. But, when anything unusual happens, we can quickly snap into a new mode where we can make different kinds of decisions as needed. This isn’t a single two-level hierarchy; in driving, for example, we typically have quite a sophisticated set of layers of attention, and move quickly to more attentive states as conditions require.

Cybersecurity systems would, it seems to me, work much more effectively if they used combinations of such models of expected/normal behavior, organized in hierarchies, as their building blocks.
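
One possible reading of that, again only a sketch with made-up layer names and thresholds: a cascade of increasingly attentive (and increasingly expensive) layers, where an event climbs to the next layer only when the current layer's model of normal flags it.

```python
# A layered cascade: each layer is a more attentive, more expensive model,
# and an event climbs a layer only when the current layer flags it.
# Layer names, predicates, and the 0.5 threshold are illustrative.
LAYERS = [
    ("habit",      lambda e: bool(e.get("signals"))),        # cheap baseline models
    ("attention",  lambda e: e.get("risk_score", 0) > 0.5),  # mid-cost scoring model
    ("deliberate", lambda e: False),                         # terminal: full analysis / analyst
]

def escalate(event):
    """Return the most attentive layer the event had to reach."""
    reached = LAYERS[0][0]
    for name, flags_exception in LAYERS:
        reached = name
        if not flags_exception(event):
            break               # this layer's model of "normal" accepts the event
    return reached

# escalate({"signals": [], "risk_score": 0.0})           -> "habit"
# escalate({"signals": ["novelty"], "risk_score": 0.9})  -> "deliberate"
```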
