Posts Tagged 'tsa'

Behavioral Detection

There was an interesting incident last week, in which a TSA behavioral detection officer flagged someone who turned out to be carrying bomb-making equipment, and who was planning to fly with it.

Although there has been some media coverage, it was probably news to most people that TSA uses behavioral detection. Behavioral detection officers are deployed at a few US airports, with plans to roll the program out to more airports quite rapidly. As a program, it has been successful: about 1% of stops seem to produce arrests, which is actually a pretty good rate. Of course, almost all of these arrests are for crimes unrelated to air travel, which raises some difficult issues that will exercise the minds of those who are extremely keen on privacy (but note the existence of the Terry stop in the US, which seems to me to more or less cover this case).

I’ve decided that when people use the word ‘profiling’ what they mean is predictive modelling based on arbitrary or intuitively derived attributes and models.

It isn’t appropriate to call this “behavioral profiling”. Its technical roots seem to come from two places. The first is Ekman’s work on microexpressions, which some people may be familiar with from Gladwell’s book, Blink. The idea is that all of us exhibit fleeting patterns of muscular use, primarily facial, that reveal parts of our underlying internal state. Some practice is evidently needed to learn to notice these in real time, but it can be done.

The second part is less public, but seems to have been developed by the Israelis as a way to defend against suicide bombers. I’ve heard one talk by an Israeli about this (light on technical details), but they claim high success rates across a wide spectrum of subjects, including, for example, schoolboys.

In the conversation about the recent incident, many connections were drawn with air security at Ben-Gurion, the main airport in Israel. It’s not entirely fair to draw such comparisons: Ben-Gurion uses defence in depth where, as I understand it, data analysis begins when you order your taxi to the airport (as well as the obvious analysis of flight manifests).

Designing a passenger screening system

From the last two posts, it’s clear that a robust passenger screening system is actually a two-stage process. The no-fly list acts as the first stage. Its role is to eliminate some people from consideration (by self-selection, because they think they may be on the list), and to force others to take steps to conceal themselves or to manipulate the process.

The second stage is a risk-management system whose primary targets are people who have not yet been detected as terrorists (and who know that they haven’t), and amateurs. This is still potentially a large group of people, since there’s plenty of grass-roots terrorism about.

In designing this second stage, it’s important to think about what the threat model actually is. The problem is harder than it looks, because it’s not possible to enumerate every way in which a terrorist could cause an incident involving an aircraft. So the threat model has to play the odds, to some extent, and concentrate on likely threats.

As a general rule, it’s a good idea to build a model of normality, rather than a model of threat — so that everyone who doesn’t fit the model of normality is subject to extra scrutiny. This is hard on people who are, for whatever reason, different from the majority of others; but this is the price that has to be paid for broad-spectrum defence against many different kinds of attacks. (This is a civic problem, and suggests attention to models that try to distinguish ordinary abnormality from dangerous abnormality — which is possible, but not yet looked at very hard.)
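
To make this concrete, here’s a minimal sketch of what a model of normality might look like in code. The passenger attributes, data, and threshold are all invented for illustration; real systems model far richer data:

```python
# A minimal sketch of a normality model; attributes and threshold invented.
from statistics import mean, stdev

# Past observations of "normal" passengers: days booked in advance,
# bags checked, party size.
normal = [
    [14, 1, 2], [21, 2, 4], [7, 1, 1], [30, 2, 2], [10, 1, 3],
    [28, 1, 2], [5, 0, 1], [18, 2, 5], [12, 1, 2], [25, 1, 2],
]

# Fit per-attribute mean and standard deviation from the normal population.
cols = list(zip(*normal))
mu = [mean(c) for c in cols]
sd = [stdev(c) for c in cols]

def abnormality(x):
    """Sum of squared z-scores: distance from the model of normality."""
    return sum(((v - m) / s) ** 2 for v, m, s in zip(x, mu, sd))

def extra_scrutiny(x, threshold=9.0):
    # Anyone far enough from normal is flagged, regardless of which
    # direction they deviate in -- the broad-spectrum property above.
    return abnormality(x) > threshold

print(extra_scrutiny([15, 1, 2]))  # typical passenger -> False
print(extra_scrutiny([0, 0, 9]))   # unusual in several ways -> True
```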

The CAPS I system in the US was, mostly by accident, as much a normality model as a threat-based model. People were subjected to extra scrutiny if they met criteria loosely associated with terrorism: buying tickets for the front of the plane, paying cash, and buying one-way tickets. There’s some argument for these as relevant attributes; but it’s just as sensible to argue that these properties are quite rare, and so the model is selecting for abnormality. One unfortunate side-effect is that the model also selects the wealthy, who often found themselves facing extra screening.
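
As code, the reported CAPS I criteria amount to a tiny rule set; something roughly like this sketch, where the scoring and threshold are my invention:

```python
# Sketch of CAPS I-style rule-based selection; weights/threshold invented.
def caps1_selectee(ticket):
    score = 0
    if ticket["paid_cash"]:
        score += 1
    if ticket["one_way"]:
        score += 1
    if ticket["cabin"] == "first":
        score += 1
    return score >= 2  # enough rare attributes -> extra scrutiny

print(caps1_selectee({"paid_cash": True, "one_way": True, "cabin": "economy"}))
```

Notice that each rule fires on a rare attribute, which is exactly why this behaves more like an abnormality detector than a threat detector.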

The other weakness of the CAPS I model was that the anticipated threat was too narrow, so the extra scrutiny was not broad enough. At the time, hijacking was considered a minor threat, while bombs in checked luggage were considered more likely (and more dangerous). As a result, several of the Sep 11th hijackers were flagged by the system, and their checked luggage was examined more carefully. Suicide bombing was not considered part of the threat.

The CAPS II program, short-lived, was more explicitly designed to model normality, with its concept of “rooted in the community”. The reason for checking commercial databases, as well as data directly related to the flight itself, was that someone who had an extensive credit record, owned a house, etc. could plausibly be considered a low risk — partly because the cost of creating such a track record would put it out of reach of many would-be terrorists.

This program raised many red flags with the public, partly because it was ill thought out, partly because it was poorly presented, and partly because of mistakes made in other areas that had led citizens not to trust government very much. (Note that the level of discomfort with businesses aggregating the same information is much lower.)

The Secure Flight program, the latest passenger screening program in the US, is a much simpler watchlist matching program. It also includes a built-in redress mechanism, and has been developed with extensive consultation. However, it is a much weaker program, relying on the completeness and integrity of the watch lists in use.

TSA discussion of Secure Flight

The requirement to use the system to detect criminals wanted for certain kinds of crimes has also been dropped.

What are no-fly lists for?

There’s a great deal of confusion in the discussion of airline security because several things are going on at once.

It’s sort of clear what the point of passenger screening is. The goal is to prevent anyone from carrying out an attack, either a hijacking or a bombing. The mechanism is to make sure that nobody can take the required tools or devices onto a flight.

Why is there an analysis component? Why not simply search everyone in exactly the same way, to make sure that they aren’t carrying anything they shouldn’t be? It’s a question of managing the costs and the risks. Analysing data about potential passengers allows them to be placed in categories. Each category consumes a different amount of resources to check; the amount is related to the perceived risk of people in that category.
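
Here’s a toy version of that cost calculation, with invented risk scores, tier boundaries, and per-passenger screening costs:

```python
# Sketch of risk-tiered screening costs; all numbers invented.
TIERS = [
    # (max risk score, tier name, minutes of screening resources)
    (0.2, "expedited", 1),
    (0.7, "standard", 4),
    (1.0, "enhanced", 15),
]

def assign_tier(risk_score):
    for cutoff, tier, minutes in TIERS:
        if risk_score <= cutoff:
            return tier, minutes

# Screening capacity needed for one flight's worth of passengers:
risk_scores = [0.05, 0.10, 0.30, 0.05, 0.90, 0.15]
total = sum(assign_tier(r)[1] for r in risk_scores)
print(total, "minutes of screening capacity")  # 23, vs 90 if all enhanced
```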

Of course, there are advantages to not making these categories too rigid or predictable, as I discussed in the previous post.

This idea of risk management is a sensible one. But, given this framework, what is the point of a no-fly list? If I can be absolutely sure that a terrorist who gets on a plane does not have the ability to do anything different from the other passengers, is there any reason not to let him fly?

There are two reasons why a no-fly list might still be a good idea:

  1. It’s actually impossible to be sure that someone who gets on a plane cannot do anything destructive, no matter how much time is spent on checking beforehand. Even the Israelis, who are no slouches when it comes to airline security, and who’ve been doing it a long time, cannot guarantee that someone is actually innocuous. It’s not clear what the issues are, but it seems at least possible to make a swallowable IED, or for a group to take on board objects that are individually innocuous, but together could make something nasty.
    As a matter of practicality, it’s also the case that people who are known to be highly motivated to carry out an attack are not worth the resources it would take to screen them completely. In other words, the point of passenger screening is to act as a safety net, catching people who aren’t known to be terrorists: either terrorists not yet discovered, or people who’ve suddenly snapped.
  2. A no-fly list puts barriers in the way of terrorists, making it hard for them to move and meet. This only makes sense for domestic travel, since borders already perform this function for international travel. In other words, once you become a terrorist you cut yourself off from moving freely around a country, because you can no longer use air transportation. Obviously, this matters more in large countries, and those with lots of islands or other physical barriers, than in small countries like those in Europe.

The biggest worry, and problem, with no-fly lists is their false positive rate: the number of innocent people who are mistakenly identified as terrorists and prevented from flying. This problem is partly self-inflicted by governments, which rushed to implement no-fly lists for largely cosmetic reasons, without taking the time to think about and build reliable lists to begin with.
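
Some back-of-envelope arithmetic shows how badly false positives dominate. Every number below is an assumption chosen only to illustrate the base-rate effect, not an official figure:

```python
# Base-rate arithmetic for watchlist false positives; numbers invented.
passengers_per_year = 700_000_000  # rough order of US enplanements
real_terrorists = 100              # generously assumed, all of them flagged
false_positive_rate = 0.0001       # 1 passenger in 10,000 wrongly matched

false_alarms = passengers_per_year * false_positive_rate
print(false_alarms)  # 70,000 innocent people stopped per year

# Fraction of all flagged people who are innocent:
print(false_alarms / (false_alarms + real_terrorists))  # about 0.9986
```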

However, constructing and maintaining such lists is difficult because it requires making secret information widely available, which is not the way to keep it secret. (The list has to be at least partly secret because covert means might have been used to put some people on it; if they knew they were on it, they might be able to work out how.) There are technical ways to check whether someone is on a list without making the list public, using encryption techniques, but this idea has not, as far as I know, been used.
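
The simplest version of such a scheme distributes keyed hashes of identities instead of names. Here’s a minimal sketch, with a made-up key and made-up names; it’s weaker than real private-set-membership protocols, since anyone holding the key can test guessed names:

```python
# Sketch: membership checking against keyed hashes instead of raw names.
# Key and names are invented; real private-set-membership protocols are
# stronger (anyone holding this key could test guessed names).
import hashlib
import hmac

SECRET_KEY = b"held-by-the-list-authority"

def digest(identity: str) -> bytes:
    return hmac.new(SECRET_KEY, identity.encode(), hashlib.sha256).digest()

# What the screening point stores: hashes, never the names themselves.
hashed_list = {digest("J. Example Suspect"), digest("A. N. Other")}

def on_list(identity: str) -> bool:
    return digest(identity) in hashed_list

print(on_list("Ordinary Traveller"))  # False
print(on_list("J. Example Suspect"))  # True
```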

The second big problem with no-fly lists is that they are lists of identities, and it’s quite hard to robustly establish the identity of the person standing in front of you if they are motivated not to make it easy. That’s why there is such a high level of interest in biometrics: they provide a way to link some property of your physical presence to other data about you, and so to establish your identity. It’s also why identity theft is big business; according to Bruce Schneier, a bigger business in the U.S. than drugs.

Governments have also piggybacked law enforcement onto no-fly lists, so that people who are wanted for crimes of sufficient seriousness can be added to such lists. Whether or not this is a good idea is a complex subject; but it does make ordinary citizens suspicious about how much mission creep is a factor in government programs aimed at detecting and preventing terrorism. More another time, perhaps.

Airline passenger screening

The so-called Carnival Booth algorithm shows the weakness of airline (and other) passenger screening.

Suppose that I’m a bad guy and I want to carry out an attack. I start with 100 possible attackers. I send them on flights around the country, on which they behave completely innocently. Anyone who ever gets pulled aside for extra screening is removed from the pool.

Eventually my pool of 100 is reduced to a much smaller number, but I can have high confidence that the remaining members will not receive any extra screening when they travel. And now I can use them to carry out an attack, with high probability that they will not be given special consideration. (Of course, they will still receive normal screening, so my attack will have to involve mechanisms that are not normally detected.)

This is why randomization helps: if the criteria for extra screening are randomized slightly from hour to hour, or perhaps some people are selected completely at random, then I can never be sure that someone who hasn’t been selected yet will not be selected next time, when they may be travelling less innocently.
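
A small simulation makes the point; the pool size, profile hit rate, and random-selection rate are all invented numbers:

```python
# Simulating the Carnival Booth attack against static vs randomized
# screening; all rates invented for illustration.
import random

random.seed(1)
POOL = 100
PROFILE_RATE = 0.3   # chance an operative matches the static profile
RANDOM_RATE = 0.05   # fraction of passengers selected purely at random

# Step 1 of the attack: fly everyone innocently, discard anyone who gets
# extra screening. Against a purely static profile, selection is the same
# on every flight, so the survivors stay "clean" forever.
survivors = [i for i in range(POOL) if random.random() >= PROFILE_RATE]
print(len(survivors), "operatives never flagged by the static profile")

# Step 2: send a survivor on the real attack. Static screening selects
# them with probability 0. With a random component, every flight is a
# fresh draw, so the probing flights told the attacker nothing.
trials = 10_000
caught = sum(random.random() < RANDOM_RATE for _ in range(trials))
print(f"randomized screening still selects a 'clean' operative "
      f"{caught / trials:.1%} of the time")
```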

The corollary to randomization is that some people who are “obviously” innocent will sometimes be given extra screening. People tend to see this as silliness or a waste of resources, but it is actually a good use of resources. This case needs to be explained to the public more clearly.

The Carnival Booth algorithm was first explained by some students from MIT.