Designing a passenger screening system

From the last two posts, it’s clear that a robust passenger screening system is actually a two-stage process. The no-fly list acts as a first stage. Its role is to eliminate some people from consideration (by self-selection, because they think they may be on the list), and to cause others to take steps to conceal themselves or manipulate the process.

The second stage is a risk-management system whose primary targets are people who have not yet been detected as terrorists (and who know that they haven’t), and amateurs. This is still potentially a large group of people, since there’s a lot of grass-roots terrorism about.

In designing this second stage, it’s important to think about what the threat model actually is. This is harder than it looks, because it’s not possible to enumerate every way in which a terrorist could cause an incident involving an aircraft. So the threat model has to play the odds, to some extent, and concentrate on likely threats.

As a general rule, it’s a good idea to build a model of normality, rather than a model of threat — so that everyone who doesn’t fit the model of normality is subject to extra scrutiny. This is hard on people who are, for whatever reason, different from the majority of others; but this is the price that has to be paid for broad-spectrum defence against many different kinds of attacks. (This is a civic problem, and suggests attention to models that try to distinguish ordinary abnormality from dangerous abnormality — which is possible, but not yet looked at very hard.)
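
As a concrete illustration, here is a minimal sketch of a normality model, assuming everything in it is invented: the features, numbers, and function names below do not describe any deployed system. It fits a baseline from ordinary travellers and scores new passengers by their distance from that baseline:

    # Minimal sketch of a normality model. All features and numbers are
    # invented for illustration; this describes no deployed system.
    import statistics

    def fit_normality_model(population):
        # Record the mean and standard deviation of each numeric feature.
        features = list(zip(*population))
        return [(statistics.mean(f), statistics.pstdev(f)) for f in features]

    def anomaly_score(model, passenger):
        # Sum of absolute z-scores: total distance from "normal".
        return sum(abs(x - mu) / sigma
                   for (mu, sigma), x in zip(model, passenger) if sigma > 0)

    # Hypothetical features: days the ticket was bought in advance, and
    # number of flights taken in the past year.
    ordinary = [(21, 4), (14, 2), (30, 6), (7, 3), (18, 5)]
    model = fit_normality_model(ordinary)
    print(anomaly_score(model, (20, 4)))  # typical traveller: low score
    print(anomaly_score(model, (0, 0)))   # very unusual pattern: high score

Anyone scoring above some threshold would get extra scrutiny; the civic problem above is precisely that a high score means only “unusual”, not “dangerous”.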

The CAPS I system in the US was, mostly by accident, as much a normality model as a threat-based model. People were subjected to extra scrutiny if they met criteria that were loosely associated with terrorism: buying tickets for the front of the plane, paying cash, and buying one-way tickets. There’s some argument for these as relevant attributes; but it’s just as sensible to argue that these properties are quite rare, and so the model is really selecting for abnormality. One unfortunate side-effect of this is that the model also selects the wealthy, who often found themselves facing extra screening.
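
Criteria like these amount to a small rule set. The sketch below shows that shape of selector; the three attributes come from the paragraph above, but the scoring and threshold are hypothetical:

    # Rule-based selection in the spirit of CAPS I. The attributes come
    # from the text above; the weights and threshold are hypothetical.
    def select_for_extra_screening(ticket):
        score = 0
        if ticket["cabin"] == "first":  # a seat at the front of the plane
            score += 1
        if ticket["paid_cash"]:
            score += 1
        if ticket["one_way"]:
            score += 1
        return score >= 2  # hypothetical threshold

    print(select_for_extra_screening(
        {"cabin": "first", "paid_cash": True, "one_way": False}))  # True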

The other weakness of the CAPS I model was that the anticipated threat was too narrow, so the extra scrutiny was not broad enough. At the time, hijacking was considered a minor threat, while bombs in checked luggage were considered more likely (and more dangerous). As a result, several of the Sep 11th hijackers were flagged by the system and their checked luggage examined more carefully. Suicide bombing was not considered part of the threat.

The short-lived CAPS II program was more explicitly designed to model normality, with its concept of “rooted in the community”. The reason for checking commercial databases, as well as data directly related to the flight itself, was that someone who had an extensive credit record, owned a house, and so on could plausibly be considered a low risk, partly because the cost of creating such a track record would put it out of reach of many would-be terrorists.
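
A “rooted in the community” check can be imagined as a simple score over such records. This is a hypothetical sketch: the credit-history and home-ownership signals come from the paragraph above, while the third signal and all weights are invented:

    # Hypothetical "rooted in the community" score in the spirit of CAPS II.
    # Signal choices and weights are invented for illustration.
    def community_rootedness(record):
        score = 0
        if record["years_of_credit_history"] >= 5:
            score += 1
        if record["owns_home"]:
            score += 1
        if record["years_at_current_address"] >= 3:  # invented signal
            score += 1
        return score  # higher = plausibly lower risk

    print(community_rootedness({"years_of_credit_history": 8,
                                "owns_home": True,
                                "years_at_current_address": 1}))  # prints 2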

This program raised many red flags with the public: partly because it was ill thought out, partly because it was poorly presented, and partly because mistakes made in other areas had led citizens not to trust government very much. (Note that the level of discomfort with businesses aggregating the same information is much lower.)

Secure Flight, the latest passenger screening program in the US, is a much simpler watch-list matching system. It also includes a built-in redress mechanism, and it has been developed with extensive consultation. However, it is a much weaker program, since it relies on the completeness and integrity of the watch lists in use.

TSA discussion of Secure Flight

The requirement to use the system to detect criminals wanted for certain kinds of crimes has also been dropped.


Airline passenger screening

The so-called Carnival Booth algorithm shows the weakness of airline (and other) passenger screening.

Suppose that I’m a bad guy and I want to carry out an attack. I start with 100 possible attackers. I send them on flights around the country, on which they behave completely innocently. Anyone who ever gets pulled aside for extra screening is removed from the pool.

Eventually my pool of 100 is reduced to a much smaller number, but I can have high confidence that the remaining members will not receive any extra screening when they travel. And now I can use them to carry out an attack, with high probability that they will not be given special consideration. (Of course, they will still receive normal screening, so my attack will have to involve mechanisms that are not normally detected.)
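
A small simulation makes the point; everything below is invented for illustration. The only property that matters is that the screening rule is deterministic, so one innocent flight per candidate reveals the system’s permanent verdict:

    # Sketch of the Carnival Booth probe against a deterministic screener.
    # The "profile score" and threshold are invented for illustration.
    import random

    def deterministic_screen(profile_score):
        # Fixed criteria: the same passenger always gets the same answer.
        return profile_score > 0.7

    random.seed(1)
    pool = [random.random() for _ in range(100)]  # 100 candidate attackers

    # One innocent flight per candidate is enough: anyone not flagged now
    # will never be flagged by this rule.
    survivors = [p for p in pool if not deterministic_screen(p)]
    print(len(survivors), "candidates are now known to pass unscreened")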

This is why randomization helps: if the criteria for extra screening are randomized slightly from hour to hour, or perhaps some people are selected completely at random, then I can never be sure that someone who hasn’t been selected yet will not be selected next time, when they may be travelling less innocently.
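
The arithmetic is simple. Suppose, hypothetically, that each flight carries an independent probability q of purely random selection on top of whatever criteria apply:

    # Why randomization defeats the probe (hypothetical numbers).
    q = 0.05  # per-flight probability of purely random selection
    k = 10    # innocent probe flights flown by each candidate

    # A candidate can survive all k probes and still be selected on the
    # attack flight; the probes reveal nothing about that independent draw.
    p_clean_record = (1 - q) ** k
    print(f"P(survives all probes) = {p_clean_record:.2f}")     # ~0.60
    print(f"P(selected on the attack flight anyway) = {q:.2f}")

Probing no longer buys certainty: a clean record over the probe flights says nothing about the random draw on the attack flight.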

The corollary to randomization is that some people who are “obviously” innocent will sometimes be given extra screening. People tend to see this as silliness or a waste of resources, but it is actually a good use of resources. This case needs to be explained to the public more clearly.

The Carnival Booth algorithm was first described by two MIT students, Samidh Chakrabarti and Aaron Strauss.

