Security theatre lives

Sydney tests its emergency notification system in the downtown core at the same time of day every time. So if a person wanted to cause an incident, guess what time they would choose?

It also seems to be done on Fridays, which is exactly the worst day to choose, since it’s the most common day for Islamist incidents.

Security theatre = doing things that sound like they improve security without actually improving it (and sometimes making it worse).


“But I don’t have anything to hide” Part III

I haven’t been able to verify it, but Marc Goodman mentions (in an interview with Tim Ferriss) that the Mumbai terrorists searched the online records of hostages when they were deciding who to kill. Another reason not to be profligate about what you post on social media.

The growing role of data curation

My view of Data Science, or Big Data if you prefer, is that it divides naturally into three different subfields:

  1. Data curation, which focuses on the issues of managing large amounts of heterogeneous data, and is primarily concerned with provenance, that is, tracking the metadata about the data.
  2. Computational science, which builds models of the real-world inside computer systems to study their properties.
  3. Analytics, which infers the properties of systems based on data about them.

I’ve posted about these ideas previously.

Data curation might seem like the poor cousin among these three, and it certainly gets the least funding and attention.

But issues of provenance have suddenly become mainstream as everyone on the web struggles to figure out what to do about fake news stories. So far, the Internet has not really addressed the issues of metadata. Most of the big content providers know who generated the content that they distribute, but they don’t necessarily make this information available to those who read the content. It’s time for the data curation experts, who tend to come from information systems and library science, to step up.

Data curation is also about to become the front line in cyberattack. As I’ve suggested (Skillicorn, D.B., Leuprecht, C., and Tait, V. 2016. Beyond the Castle Model of Cybersecurity. Government Information Quarterly), a natural cyberdefence strategy is replication. Data exfiltration is made much more difficult if there are many superficially similar versions of any document or data that might be a target. However, progress in assigning provenance becomes the cyberattack that matches this cyberdefence.

So here’s the research question for data curation: how can I tell, from the internal evidence, and partial external evidence, whether this particular document is legitimate (or is the legitimate version of a set of almost-replicates)?
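To see why this is hard, here is a minimal sketch (the documents and similarity threshold are invented for illustration) showing that standard text-similarity measures can find a cluster of near-replicates, but say nothing about which member is legitimate:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Ratio in [0, 1] of matching content between two documents."""
    return SequenceMatcher(None, a, b).ratio()

# Three superficially similar versions of a "document"; the decoys differ
# from the original in one small but consequential detail.
versions = [
    "The payment of $10,000 is due on March 1.",
    "The payment of $10,000 is due on March 2.",   # near-replicate decoy
    "The payment of $90,000 is due on March 1.",   # near-replicate decoy
]

# Pairwise similarity shows the versions cluster tightly together, so
# internal evidence of this kind alone cannot single out the legitimate one.
for i in range(len(versions)):
    for j in range(i + 1, len(versions)):
        print(i, j, round(similarity(versions[i], versions[j]), 2))
```

Any approach to the research question therefore needs more than surface similarity: provenance metadata, or deeper internal evidence, is what separates the legitimate version from its almost-replicates.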

6.5/7 US presidential elections predicted from language use

I couldn’t do a formal analysis of Trump/Clinton language because Trump didn’t put his speeches online — indeed many of them weren’t scripted. But, as I posted recently, his language was clearly closer to our model of how to win elections than Clinton’s was.

So since 1992, the language model has correctly predicted the outcome, except for 2000 when the model predicted a very slight advantage for Gore over Bush (which is sort of what happened).

People judge candidates on who they seem to be as a person, a large part of which is transmitted by the language they use. Negative and demeaning statements obviously affect this, but so do positivity and optimism.

Voting is not rational choice

Pundits and the media continue to be puzzled by the popularity of Donald Trump. They point out that much of what he says isn’t true, that his plans lack content, that his comments about various subgroups are demeaning, and so on, and so on.

Underlying these plaintive comments is a fundamental misconception about how voters choose the candidate they will vote for. This has much more to do with the snap judgements of character and personality that humans make in the first few seconds than with calm, reasoned decision making.

Our analysis of previous presidential campaigns (about which I’ve posted earlier) makes it clear that this campaign is not fundamentally different in this respect. It’s always been the case that voters decide based on the person who appeals to them most on a deeper than rational level. As we discovered, the successful formula for winning is to be positive (Trump is good at this), not to be negative (Trump is poor at this), not to talk about policy (Trump is good at this), and not to talk about the opponent (Trump is poor at this). On the other hand, Hillary Clinton is poor at all four — she really, really believes in the rational voter.

We’ll see what happens in the election this week. But apart from the unusual facts of this presidential election, it’s easy to understand why Trump isn’t doing worse and Hillary Clinton isn’t doing better from the way they approach voters.

It’s not classified emails that are the problem

There’s been reporting that the email trove, belonging to Huma Abedin but found on the laptop of her ex-husband, got there as the result of automatic backups from her phone. This seems plausible; if it is true then it raises issues that go beyond whether any of the emails contain classified information or not.

First, it shows how difficult it is for ordinary people to understand the consequences of their choices when configuring their life-containing devices. Backing up emails is good, but every user needs to understand what that means, and how potentially invasive it is.

Second, to work as a backup site, this laptop must have been Internet-facing and (apparently) unencrypted. That means that more than half a million email messages were readily accessible to any reasonably adept cybercriminal or nation-state. If there are indeed classified emails among them, then that’s a big problem.

But even if there are not, access to someone’s emails, given the existence of textual analytics tools, means that a rich picture can be built up of that individual: what they are thinking about, who they are communicating with (their ego network in the jargon), what the rhythm of their day is, where they are located physically, what their emotional state is like, and even how healthy they are.
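As a small illustration of how little machinery one piece of that picture needs, here is a hedged sketch (all addresses and messages below are invented) that extracts an ego network just from message headers, before any of the message bodies are even read:

```python
from collections import Counter
from email.parser import Parser

# Invented messages standing in for a mailbox dump.
raw_messages = [
    "From: boss@example.org\nTo: aide@example.org\nSubject: schedule\n\nbody",
    "From: aide@example.org\nTo: boss@example.org\nSubject: Re: schedule\n\nbody",
    "From: aide@example.org\nTo: friend@example.net\nSubject: dinner\n\nbody",
]

def ego_network(messages, ego):
    """Count how often each correspondent exchanges mail with the ego account."""
    edges = Counter()
    for raw in messages:
        msg = Parser().parsestr(raw)
        people = {msg["From"], msg["To"]}
        if ego in people:
            for other in people - {ego}:
                edges[other] += 1
    return edges

# The aide's strongest tie is visible immediately from header counts alone.
print(ego_network(raw_messages, "aide@example.org"))
```

Timestamps, subject lines, and body text would sharpen the picture further; the point is that even the thinnest layer of metadata is already revealing.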

For any of us, that kind of analysis would be quite invasive. But when the individual is a close confidante of the U.S. Secretary of State, and when many of the emails are from that same Secretary, a picture of them at this level of detail is valuable, and could be exploited by an adversary.

Lawyers and the media gravitate to the classified information issue. This is a 20th Century view of the problems that revealing large amounts of personal text cause. The real issue is an order of magnitude more subtle, but also an order of magnitude more dangerous.

Biometrics are not the answer for authentication

I’ve pointed out before that biometrics are not a good path to follow to avoid the obvious and growing issues with authentication using passwords.

Many biometrics suffer from being easy to spoof: pictures of someone’s iris, appropriately embedded in a background, can fool iris readers, a sheet of clingfilm can often cause a fingerprint reader to ‘see’ the last real fingerprint used on it, and so on.

But there’s a more pervasive problem with biometrics. The fact that a biometric is something you are is, on the one hand, a positive because you don’t have to remember anything, and wherever you go, there you are.

But, on the other hand, a biometric cannot be changed, and this turns out to be a huge problem.

Suppose you go to authenticate using a biometric. The device that captures your biometric must convert it to something digital, and then compare that digital value to a previously recorded value associated with you.

There are two problems:

  1. For a while, the device has your biometric data as plaintext. It may be encrypted very close to the place where it is captured, but there is a gap, and the unencrypted version can potentially be grabbed in the gap. There is always a temptation/pressure to use low-power sensors for capture, and they may not be able to handle the encryption.
  2. The previously recorded values must be kept somewhere. If this location can be hacked, then the encrypted versions of the biometric can be copied. These encrypted versions can then be used for replay attacks.
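A simplified sketch of that flow makes both problems concrete. It assumes, unrealistically, that templates match exactly and that "encryption" is a keyed hash with a system-wide key; real systems use fuzzy matching, but the replay logic is the same:

```python
import hashlib
import hmac

# Shared across all readers: cannot be changed without re-enrolling everyone.
SYSTEM_KEY = b"shared-across-all-readers"

def capture() -> bytes:
    # Problem 1: the raw biometric exists as plaintext at this point,
    # inside the (possibly low-power) sensor, before any protection.
    return b"raw-fingerprint-sample"

def protect(plaintext: bytes) -> bytes:
    # Keyed hash standing in for whatever encryption the sensor applies.
    return hmac.new(SYSTEM_KEY, plaintext, hashlib.sha256).digest()

# Enrolment: the protected template is stored server-side.
# Problem 2: this store can be breached.
stored_template = protect(capture())

def verify(presented: bytes) -> bool:
    return hmac.compare_digest(presented, stored_template)

# Legitimate authentication.
assert verify(protect(capture()))

# Replay: an attacker who stole stored_template never needs the finger.
stolen = stored_template
assert verify(stolen)
```

Note that rotating SYSTEM_KEY after a breach would invalidate every enrolled user's template at once, which is exactly the update problem described below.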

Of course, there are defences. But, for example, if e-passports are to be used to enter multiple countries, then readers in every country must share the same repertoire of encryption techniques so that passports from multiple countries can be read by the same system. So it’s not enough to argue that giving each system its own mapping from biometric plaintext to encrypted version will prevent these issues.

And if one person’s encrypted biometric is stolen, there’s no practical way to update the systems that rely on it (since they must continue to use the same mapping so that everyone else’s biometrics will still work). More importantly, there’s no way to issue a fresh identity for the person whose data was stolen (“Go and have plastic surgery so that we can restore your use of facial recognition”).