
Game theory in adversarial settings

In an adversarial setting, the good guys want to take the best actions they can to thwart the bad guys — but often the bad guys can see what the choices are, and what the incentive structure looks like, and so they can respond in the way that makes it hardest for the good guys. This very quickly descends into “I know that he knows that I know that he knows … and so I’ll do X”. Finding the “best” solution in situations like this is what game theory is all about: in many situations, it prescribes exactly what each player should do.

But is there a way to beat the deterministic kind of behaviour that this produces, especially when the game might only be played once and it really, really matters whether you win or not? One general approach is to use randomness. I’ve advocated using randomness to blur decision boundaries a little, for example in deciding who gets extra scrutiny at airports. This prevents probing attacks, where someone tests the system repeatedly while innocent to see whether they get selected, and, if they never are, goes ahead and does something bad. Adding randomization means that never having been selected while innocent is no guarantee of not being selected while carrying out an attack — for which there is only one chance.
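To make the probing-attack argument concrete, here is a minimal sketch of randomized screening. The names, threshold, and base rate are all illustrative assumptions, not any real agency's rule: a purely deterministic threshold can be probed to certainty, but mixing in a small random base rate means no number of clean dry runs guarantees safety.

```python
import random

# Illustrative randomized-screening rule (all numbers are assumptions).
# With a purely deterministic rule (risk_score >= THRESHOLD), an attacker
# who is never selected on dry runs learns they are "safe". Adding a small
# random base rate for everyone else removes that guarantee.

THRESHOLD = 0.8   # deterministic trigger (assumed value)
BASE_RATE = 0.05  # chance of random selection below the threshold (assumed)

def selected_for_screening(risk_score, rng=random):
    """Return True if this traveller gets extra scrutiny on this trip."""
    if risk_score >= THRESHOLD:
        return True                      # high scores are always selected
    return rng.random() < BASE_RATE      # everyone else faces a small chance
```

Even a traveller whose score never crosses the threshold now faces, on every trip, a small independent chance of selection — so surviving a hundred probes tells them nothing certain about the one trip that matters.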

This idea has been used in a more formal way by the ARMOR project, which uses an algorithmic approach that introduces exactly the right amount of randomness: enough to keep an observing adversary guessing, while still concentrating resources where they matter most. The technology has been deployed successfully to determine patrol schedules at LAX, to decide on the timing of random searches at Customs, and to schedule air marshals onto flights.
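The flavour of the calculation can be shown on a toy two-target game. This is my own illustrative sketch, not ARMOR's actual model: the defender commits to coverage probabilities, the attacker observes those probabilities and attacks the most attractive target, and the defender picks the mix that minimizes the attacker's best response. The payoff numbers below are invented.

```python
# Toy security game (illustrative numbers, not from any real deployment).
# The defender can cover one of two targets at a time; the attacker sees
# the coverage *probabilities* (not the realized patrol) and attacks the
# target with the highest expected payoff. The defender chooses the
# coverage probability that makes that best response as bad as possible.

def attacker_value(uncovered, covered, cover_prob):
    """Attacker's expected payoff for a target covered with cover_prob."""
    return cover_prob * covered + (1 - cover_prob) * uncovered

def best_coverage(u1, c1, u2, c2, steps=1000):
    """Probability of covering target 1 (target 2 gets the rest) that
    minimizes the attacker's best response, by a simple grid search."""
    best_p, best_val = 0.0, float("inf")
    for i in range(steps + 1):
        p = i / steps
        worst = max(attacker_value(u1, c1, p),
                    attacker_value(u2, c2, 1 - p))
        if worst < best_val:
            best_p, best_val = p, worst
    return best_p, best_val

# Target 1 is more valuable to the attacker (reward 10 vs 6 if uncovered),
# so the optimal mix covers it more often — but not always.
p, v = best_coverage(u1=10, c1=-5, u2=6, c2=-3)
print(f"cover target 1 with probability {p:.2f}")  # roughly 0.54, not 1.0
```

The point of the example is that the optimal strategy covers the high-value target only part of the time: committing to it always would leave the other target free, and the attacker, who can see the pattern, would simply go there instead.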

The technology is mathematically complex, but it is yet another example of how carefully applied randomness can deliver results that no deterministic strategy, however clever, can match.