Provenance — the most important ignored concept

I’ve been thinking lately about the problems faced by the customs and border protection parts of government. The essential problem they face is this: when an object (possibly a person) arrives at a border, they must decide whether or not to allow that object to pass, and whether to charge for the privilege.

The basis for this decision is never the honest face of the traveller or the nicely wrapped package. Rather, all customs organisations gather extra information to use. At one time, this was a document of some sort accompanying the object, itself certified by someone trustworthy; increasingly it is data collected about the object before it arrives at the border, and about the social or physical network in which it is embedded. For humans, this might include data provided on a visa application, travel payment details, watch lists, and social-network activity. For physical objects, it might include the properties and history of the shipper and the destination.

In other words, customs wants to know the provenance of every object, in as much detail as possible, and going back as far as possible. All of the existing analytic techniques in this space are, in effect, approximations to that provenance data and to the ways it can be leveraged.

Other parts of government, for example law enforcement and intelligence, are also interested in provenance. Sleeper agents, for instance, are people embedded in another country with a false provenance. The use of aliases is an attempt to break chains of provenance.

Ordinary citizens are wary of this kind of government data collection, feeling that society rightly depends on two mechanisms for erasing parts of our personal history: ignoring inconsequential parts (youthful hijinks, sealed criminal records), at least once they have receded far enough into the past; and forgiving other parts. There’s a widespread acceptance that people can change, and that it’s too restrictive to make this impossible.

The way we handle this today is that we explicitly delete certain information from the provenance, but make decisions as if the provenance were complete. This works all of the way from government level to the level of a marriage.

There is another way to handle it, though. That is to keep the provenance complete, but make decisions in ways that acknowledge explicitly that using some data is less appropriate. The trouble is that nobody trusts organisations to make decisions in this second way.

There’s an obvious link between blockchains and provenance, but the whole point of a blockchain is to make erasure difficult. So they will really only have a role to play in this story if we, collectively, adopt the second mechanism.

The issues become more difficult if we consider the way in which businesses use our personal information. Governments have a responsibility to implement social decisions and so are motivated to treat individual data in socially approved ways. Businesses have no such constraint — indeed they have a fiduciary responsibility to leverage data as much as they can. A business that knows data about me has no motivation to delete or ignore some of it.

This is the dark side of provenance. As I’ve discussed before, the ultimate goal of online businesses is to build models of everyone on the planet and how much they’re willing to pay for every product. In turn, sales businesses can use these models to apply differential pricing to their customers, and so increase their profits.
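
To make the mechanism concrete, here is a minimal sketch of differential pricing driven by a per-customer willingness-to-pay estimate. The function, the 'capture' fraction, and the numbers are all hypothetical, chosen only to illustrate the idea:

    # Hypothetical sketch: differential pricing from a per-customer
    # willingness-to-pay (WTP) estimate. All names and numbers are illustrative.

    def quoted_price(list_price, predicted_wtp, capture=0.9):
        """Quote each customer a fraction of their predicted willingness to pay,
        never dropping below the list price."""
        return max(list_price, capture * predicted_wtp)

    # Two customers, same product, different provenance-derived WTP estimates.
    print(quoted_price(list_price=100.0, predicted_wtp=105.0))  # 100.0
    print(quoted_price(list_price=100.0, predicted_wtp=180.0))  # 162.0

The more complete a customer's provenance, the sharper the willingness-to-pay estimate, and the closer the quoted price can be pushed to what that customer will actually bear.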

This sets up an adversarial situation, in which consumers try to make their acceptable price look much lower than it really is. Having a complete, unalterable personal provenance makes this much, much more difficult. But by the time most consumers realise this, it will be too late.

In all of these settings, the data collected about us is becoming so all-encompassing and permanent that it is going to change the way society works. Provenance is an under-discussed topic, but it’s becoming a central one.


If you see something, say something — and we’ll ignore it

I arrived on a late evening flight at a Canadian airport that will remain nameless, and I was the second person into an otherwise deserted Customs Hall. On a chair were a cloth shoulder bag and a 10″ by 10″ by 4″ opaque plastic container. Being a good citizen, I went over to the distant Customs officers on duty and told them about it. They did absolutely nothing.

There are lessons here about predictive modelling in adversarial settings. The Customs officers were using, in their minds, a Bayesian predictor, which is the way that we, as humans, make many of our predictions. In this Bayesian predictor, the prior probability that the ownerless items contained explosives was so small that, even after seeing an unattended bag, the posterior probability that they should act remained very small, and so they didn’t act.

Compare this to the predictive model used by firefighters. When a fire alarm goes off, they don’t consider a prior at all. That is, they don’t weigh factors such as the fact that a lot of new students have just arrived in town, or that they answered a hoax call at this location an hour ago, or anything else of the same kind. They respond regardless of whether they consider it a ‘real’ fire or not.
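
To make the contrast concrete, here is a minimal sketch of the two decision policies. The prior, likelihoods, and threshold are invented numbers, chosen only to show how a tiny prior swamps even fairly diagnostic evidence:

    # Illustrative contrast between the two decision policies.
    # All probabilities are invented for the example, not real statistics.

    def bayesian_policy(prior, p_obs_given_threat, p_obs_given_benign, threshold=0.01):
        """Act only if the posterior probability of a threat exceeds a threshold."""
        # Bayes' rule: P(threat | observation)
        numerator = p_obs_given_threat * prior
        evidence = numerator + p_obs_given_benign * (1 - prior)
        posterior = numerator / evidence
        return posterior, posterior >= threshold

    def firefighter_policy(alarm_observed):
        """Respond to every alarm; no prior enters the decision."""
        return alarm_observed

    # Suppose an unattended bag is 20 times more likely when there really is a
    # threat, but threats themselves have a one-in-a-million prior.
    posterior, act = bayesian_policy(prior=1e-6,
                                     p_obs_given_threat=0.9,
                                     p_obs_given_benign=0.045)
    print(posterior, act)            # ~2e-05, False: the prior dominates
    print(firefighter_policy(True))  # True: respond regardless

Under the Bayesian policy, ignoring the bag is numerically 'rational'; the firefighter policy accepts many false alarms in exchange for never missing the rare real event.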

The challenge is how to train front-line defenders against acts of terror to use the firefighter predictive model rather than the Bayesian one. Clearly, there’s still some distance to go.

