Posts Tagged 'privacy'

“But I don’t have anything to hide” Part III

I haven’t been able to verify it, but Marc Goodman mentions (in an interview with Tim Ferriss) that the Mumbai terrorists searched the online records of hostages when they were deciding who to kill. Another reason not to be profligate about what you post on social media.

Come back King Canute, all is forgiven

You will remember that King Canute held a demonstration in which he showed his courtiers that he did not have the power to hold back the tide.

Senior officials in Washington desperately need courtiers who will show them, with equal force, that encryption has the same sort of property. If it’s done right, encrypted material can’t be decrypted by fiat. And any backdoor to the encryption process can’t be made available only to the good guys.

The current story about Apple and the encrypted phone used by one of the San Bernardino terrorists is not helping to make this issue any clearer to government, largely because the media coverage is so muddled that nobody could be blamed for missing the point.

The basic facts seem to be these: the phone is encrypted, the FBI have been trying to get into it for some time, and there's no way for anyone, Apple included, to burn through the encryption without the password. This is all as it was designed to be.

The FBI is now asking Apple to alter the access control software so that, for example, the ten-try limit on password guesses is disabled. Apple is refusing on two grounds. First, this amounts to the government compelling them to construct something, a form of conscription that Apple argues is illegal (the FBI could presumably contract with Apple to build the required software, but Apple evidently has no appetite for this).

Second, Apple argues that the existence proof of such a construct would make it impossible for them to resist the same request from other governments, where the intent might be less benign. This is an interesting argument. On the one hand, if they can build it now, they can build it then, and nobody’s claiming that the required construct is impossible. On the other hand, there’s no question that being able to do something in the abstract is psychologically quite different from having done it.

But it does seem as if Apple is using its refusal as a marketing tool for its high-mindedness and pro-privacy stance. Public opinion might have an effect if only the public could work out what the issues are — but the media have such a tenuous grasp that every story I saw today guaranteed greater levels of confusion.

“But I don’t have anything to hide” Part II

In an earlier post, I pointed out that differential pricing — the ability of businesses to charge different people different prices — is one of the killer apps of data analytics. Since the goal of businesses will be to charge everyone the most they're willing to pay, there are strong reasons why we might not want to be modelled this way, even if "we have nothing to hide".

There's a second area in which models of everyone might feel like a bad thing — healthcare. As the costs of healthcare rise, there will be strong arguments that individuals need to be participants in maintaining their health. This sounds good — but when big data collection means that a healthcare provider might charge more to someone who has been smoking (plausible case), overeating (hmm), not exercising enough (hmmm), or any of a number of other lifestyle choices, it begins to be uncomfortable, even if "we have nothing to hide".

The point of much data analytics by organisations and states is to assess the cost/benefit ratio of interacting with you. Of course, humans (and their organisations) have always made such assessments. What is new is the ability to do so in a fine-grained, pervasive, and never-removable way. What will be lost is the possibility of a fresh start, of redemption, or even of convenient amnesia about aspects of the past.

“But I don’t have anything to hide”

This is the common response of many ordinary people when the discussion of (especially) government surveillance programs comes up. And they’re right, up to a point. In a perfect world, innocent people have nothing to fear from government.

The bigger problem, in fact, comes from the data collected and the models built by multinational businesses. Everyone has something to hide from them: the bottom-line prices we are willing to pay.

We have not yet quite reached the world of differential pricing. We’ve become accustomed to the idea that the person sitting next to us on a plane may have paid (much) less for the identical travel experience, but we haven’t quite become reconciled to the idea that an online retailer might be charging us more for the same product than they charge other people, let alone that the chocolate bar at the corner store might be more expensive for us. If anything, we’re inclined to think that an organisation that has lots of data about us and has built a detailed model of us might give us a better price.

But it doesn’t require too much prescience to see that this isn’t always going to be the case. The seller’s slogan has always been “all the market can bear”.
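The gap between "one price for everyone" and "all the market can bear" is easy to see with some arithmetic. Here is a toy sketch — every number is invented for illustration — comparing a seller's revenue at the best single posted price with the revenue available to a seller who has modelled each customer's bottom line:

```python
# Toy illustration (all numbers invented): why a seller with a model of
# each customer's bottom line earns more than any single posted price.

# Each customer's (normally unobservable) maximum willingness to pay.
willingness_to_pay = [4.00, 5.50, 7.25, 9.00, 12.00]

def flat_revenue(price, wtp):
    """Revenue at a single posted price: only customers whose
    willingness to pay meets the price actually buy."""
    return sum(price for w in wtp if w >= price)

# The best the seller can do with one price for everyone.
best_flat = max(flat_revenue(p, willingness_to_pay) for p in willingness_to_pay)

# Perfect differential pricing: charge each customer exactly their limit.
modelled = sum(willingness_to_pay)

print(best_flat)  # 22.0  (posted price of 5.50, four buyers)
print(modelled)   # 37.75 (every customer pays their bottom line)
```

The flat-price seller either loses the high-value customers' surplus or prices the low-value customers out; the modelled seller captures both, which is precisely why the bottom line is worth hiding.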

Any commercial organisation, under the name of customer relationship management, is building a model of your predicted net future value. Their actions towards you are driven by how large this is. Any benefits and discounts you get now are based on the expectation that, over the long haul, they will reap the converse benefits and more. It's inherently an adversarial relationship.
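A minimal sketch of what "predicted net future value" computes — the figures and the simple retention-and-discount model here are assumptions for illustration, not any particular company's method:

```python
# Sketch (invented numbers, simplified model) of predicted net future
# value: expected yearly margin from a customer, weighted by the chance
# they are still a customer that year, discounted back to today.

def net_future_value(annual_margin, retention_rate, discount_rate, years):
    """Discounted sum of expected future margins from one customer."""
    return sum(
        annual_margin * (retention_rate ** t) / ((1 + discount_rate) ** t)
        for t in range(1, years + 1)
    )

# A customer worth $120/year in margin, with 85% yearly retention,
# discounted at 10%, looking ten years ahead:
value = net_future_value(120, 0.85, 0.10, years=10)
```

Today's discount is rational for the seller whenever it is smaller than the increase it buys in this number — which is why the relationship is adversarial even when it feels generous.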

Now think about the impact of data collection and modelling, especially with the realization that everything collected is there for ever. There’s no possibility of an economic fresh start, no bankruptcy of models that will wipe the slate clean and let you start again.

Negotiation relies on the property that each party holds back their actual bottom line. In a world where your bottom line is probably better known to the entity you’re negotiating with than it is to you, can you ever win? Or even win-win? Now tell me that you have nothing to hide.

[And, in the ongoing discussion of post-Snowden government surveillance, there’s still this enormous blind spot about the fact that multinational businesses collect electronic communication, content and metadata; location; every action on portable devices and some laptops; complete browsing and search histories; and audio around any of these devices. And they’re processing it all extremely hard.]

Blurring Identity

As I posted a few days ago, if we can’t avoid having data about ourselves, our actions, and our identities collected, then one plausible strategy is to create artificial data to supplement the real data. This makes it harder to find the real identity inside the larger blurred identity.
And now there's a tool to help: Please Dont Stalk Me, which allows Twitter users to make it look as if their tweets come from anywhere in the world.
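The blurring idea itself is simple. Here is a hypothetical sketch (function names and parameters are mine, not the tool's): hide one real geotag among uniformly random decoy coordinates, so the true location is just one point in a cloud.

```python
# Hypothetical sketch of location blurring: mix one real coordinate into
# a set of random decoys so an observer can't tell which point is real.
import random

def blur_location(real_lat, real_lon, n_decoys=9, seed=None):
    """Return the real coordinate shuffled in among random decoy points."""
    rng = random.Random(seed)
    points = [(rng.uniform(-90, 90), rng.uniform(-180, 180))
              for _ in range(n_decoys)]
    points.append((real_lat, real_lon))
    rng.shuffle(points)
    return points

# One real location (London) hidden among nine decoys:
cloud = blur_location(51.5074, -0.1278, n_decoys=9)
```

Note that uniformly random decoys are themselves a giveaway — real people's locations cluster around cities and daily routines — which foreshadows the problem discussed below: faking plausible data is harder than generating random data.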


Fake Identities

Sixty years ago, all that was required to create a fake identity was the ability to forge documents — think 'Allo 'Allo. However, such identities didn't stand up to much scrutiny, since any check against central records would reveal the forgery. So intelligence organisations during the Cold War had to create deeper fake identities by inserting false records into government archives, or by taking over the identities of other people who weren't using them. The nature of the process meant that they had to be developed ahead of when they would be needed, because there was no practical way to retro-insert the necessary documents.

Fast forward to today and have pity on the intelligence organisation employees whose job it is to keep many cover identities functioning in case they're needed one day: posting on Facebook and LinkedIn, making comments on other people's posts, and generally simulating a real person against the day when that record of existence might be needed as background for someone to assume the identity.

They have some new problems. First, it's not enough to create tokens of identity; they have to involve activities, and so the work is constant. Second, everything they do is (potentially) recorded, so any mistake is captured for ever. Third, and most importantly, we don't know enough about how real people behave online to fake it successfully. For example, suppose I want to post on a social network site about as often as the median person with my demographics. I usually don't have enough information to guess what that median is with any reliability. Even if I do, there are deeper patterns to such postings — for example, the distribution by time of day — that I really should try to mimic but practically can't. If I'm an intelligence professional, I may unwittingly post during the work day when the identity I am simulating would more plausibly post in the evening. And if I try to construct a realistic social network around myself I have even more difficulty knowing what it "should" look like, let alone making it happen.

For a fun example of this process and its pitfalls, search for “Robin Sage” at your favorite search engine.

The flipside of the difficulty of faking an online identity is that my online presence over time becomes a better guarantee of my identity than that provided by governments. Because I've had a web page for a long time, and it's been captured at unpredictable moments by the Wayback Machine, it provides quite a strong basis for my identity.