Posts Tagged 'Canada'

The cybercrime landscape in Canada

Statscan recently released the results of their survey of cybercrime and cybersecurity in 2017.

Here are some of the highlights:

  • About 20% of Canadian businesses had a cybersecurity incident (that they noticed). Of these around 40% had no detectable motive, another 40% were aimed at financial gain, and around 23% were aimed at getting information.
  • More than half of these incidents had enough of an impact to prevent the business from operating for at least a day.
  • Rates of incidents were much higher in the banking sector and pipeline transportation sector (worrying), and in universities (not unexpected, given their need to operate openly).
  • About a quarter of businesses don’t use an anti-malware tool, about a quarter do not have email security (not clear what this means, but presumably antivirus scanning of incoming email, and maybe exfiltration protection), and almost a third do not have network security. These are terrifying numbers.

Relatively few businesses have a policy for managing and reporting cybersecurity incidents; vanishingly few have senior executive involvement in cybersecurity.

It could be worse, but this must be disappointing to those in the Canadian government who’ve been developing and pushing out cyber awareness.

Canadian election 2015: Leaders’ debate

Regular readers will recall that I’m interested in elections as examples of the language and strategy of influence — what we learn can be applied to understanding jihadist propaganda.

The Canadian election has begun, and last night was the first English-language debate by the four party leaders: Stephen Harper, Elizabeth May, Thomas Mulcair, and Justin Trudeau. Party leaders do not get elected directly, so all four participants had trouble wrapping their minds around whether they were speaking as party spokespeople or as “presidential” candidates.

Deception is a critical part of election campaigns, but not in the way that people tend to think. Politicians make factual misstatements all the time, but it seems that voters have already baked this into their assessments, and so candidates pay no penalty when they are caught making such statements. This is annoying to the media outlets that use fact checking to discover and point out factual misstatements, because nobody cares, and they can’t figure out why.

Politicians also try to present themselves as smarter, wiser, and generally more qualified for the position for which they’re running, and this is a much more important kind of deception. In a fundamental sense, this is what an election campaign is — a Great White Lie. Empirically, the candidate who is best at this kind of persona deception tends to win.

Therefore, measuring levels of deception is a good predictor of the outcome of an election. Recall that deception in text is signalled by (a) reduced use of first-person singular pronouns, (b) reduced use of so-called exclusive words (“but”, “or”) that introduce extra complexity, (c) increased use of action verbs, and (d) increased use of negative-emotion words. This model can be applied by counting the number of occurrences of these words, adding them up (with appropriate signs), and computing a score for each document. But it turns out to be much more effective to add a step that weights each word by how much it varies in the set of documents being considered, and computing this weighted score.
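As a minimal sketch of this scoring procedure (with made-up word lists standing in for the model's real lexicons, and a simple standard-deviation weighting; the actual model may differ in its word lists and weighting), it could look like:

```python
import numpy as np

# Hypothetical word lists standing in for the model's four categories;
# the real model uses larger, validated lexicons.
FIRST_PERSON = {"i", "me", "my"}          # sign -1: fewer => more deceptive
EXCLUSIVE    = {"but", "or", "except"}    # sign -1: fewer => more deceptive
ACTION       = {"go", "going", "take", "taking", "look"}  # sign +1
NEG_EMOTION  = {"afraid", "hate", "worthless"}            # sign +1

SIGN = {w: s for words, s in [(FIRST_PERSON, -1), (EXCLUSIVE, -1),
                              (ACTION, +1), (NEG_EMOTION, +1)]
        for w in words}
VOCAB = sorted(SIGN)

def signed_rates(doc):
    """Signed per-word rates for one document, normalised by length."""
    tokens = doc.lower().split()
    n = max(len(tokens), 1)
    return np.array([SIGN[w] * tokens.count(w) / n for w in VOCAB])

def deception_scores(docs):
    """Score each document: weight each model word by how much its rate
    varies across the document set, then sum the weighted signed rates."""
    rates = np.array([signed_rates(d) for d in docs])
    weights = rates.std(axis=0)   # high-variation words contribute more
    return rates @ weights
```

On this toy model, a document heavy in action verbs and light in first-person pronouns and exclusive words comes out with the higher (more deceptive) score.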

So, I’ve taken the statements by each of the four candidates last night, and put them together into four documents. Then I’ve applied this deception model to these four documents, and ranked the candidates by levels of deceptiveness (in this socially acceptable election-campaign meaning of deceptiveness).

This figure shows, in the columns, the intensity of the 35 model words that were actually used, in decreasing frequency order. The rows are the four leaders in alphabetical order: Harper, May, Mulcair, Trudeau; and the colours are the intensity of the use of each word by each leader. The top few words are: I, but, going, go, look, take, my, me, taking, or. But remember, a large positive value means a strong contribution of this word to deception, not necessarily a high frequency — so the brown bar in column 1 of May’s row indicates a strong contribution coming from the word “I”, which actually corresponds to low rates of “I”.

This figure shows a plot of the variation among the four leaders. The line is oriented from most deceptive to least deceptive; so deception increases from the upper right to the lower left.

Individuals appear in different places because of different patterns of word use. Each leader’s point can be projected onto this line to generate a (relative) deception score.
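A sketch of that projection, assuming the line is taken as the leading principal direction of the leaders' weighted word-score vectors (the post doesn't specify the exact construction, so this SVD-based version is an illustrative stand-in):

```python
import numpy as np

def axis_scores(points):
    """Project each row (one leader's weighted word-score vector) onto
    the direction of greatest variation across the set, yielding a
    relative score per leader. The sign of the axis is arbitrary; only
    the ordering along it is meaningful."""
    X = points - points.mean(axis=0)       # centre the cloud
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[0]                       # coordinates along the first axis
```

Because the points are centred first, the scores sum to zero: each leader is placed relative to the group, which matches the "(relative) deception score" described above.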

May appears at the most deceptive end of the spectrum. Trudeau and Harper appear at almost the same level, and Mulcair appears significantly lower. The black point represents an artificial document in which each word of the model is used at one standard deviation above neutral, so it represents a document that is quite deceptive.

You might conclude from this that May managed much higher levels of persona deception than the other candidates and so is destined to win. There are two reasons why her levels are high: she said much less than the other candidates, so her results are distorted by the necessary normalizations; and she used “I” many fewer times than the others. Her interactions were often short as well, reducing the opportunities for some kinds of words to be used at all, notably the exclusive words.

Mulcair’s levels are relatively low because he took a couple of opportunities to talk autobiographically. This seems intuitively to be a good strategy — appeal to voters with a human face — but unfortunately it tends not to work well. To say “I will implement a wonderful plan” invites the hearer to disbelieve that the speaker actually can; saying instead “We will implement a wonderful plan” makes the hearer’s disbelief harder because they have to eliminate more possibilities; and saying “A wonderful plan will be implemented” makes it a bit harder still.

It’s hard to draw strong conclusions in the Canadian setting because elections aren’t as much about personalities. But it looks as if this leaders’ debate might have been a wash, with perhaps a slight downward nudge for Mulcair.

Canada’s Anti Spam — its one good feature spoiled

I commented earlier that the new Canadian Anti Spam law and Spam Reporting Centre were a complete waste of money because:

1. Spam is no longer a consumer problem, but a network problem which this legislation won’t help.
2. Most spammers are beyond the reach of Canadian law enforcement, even if attribution could be cleanly done.
3. There’s an obvious countermeasure for spammers — send lots of spam to the Spam Reporting Centre and pollute the data.

There was one (unintended) good feature, however. Legitimate businesses who knew my email address and therefore assumed, as businesses do, that I would love to get email from them about every imaginable topic, have felt obliged to ask for my permission to keep doing so. (They aren’t getting it!)

BUT all of these emails contain a link of the form “click here to continue getting our (um) marketing material”, because they’ve realised that nobody’s going to bother with a more cumbersome opt-in mechanism.

Guess what? Spear phishing attacks have been developed to piggyback on this flood of permission emails — I’ve had a number already this week. Since they appear to come from mainstream organisations and the emails look just like theirs, I’m sure they’re getting lots of fresh malware downloaded. So look out for even more botnets based in Canada. And thanks again, Government of Canada for making all of this possible.

Spam Reporting Centre

The Canadian government has decided to create a spam reporting centre (aka ‘The Freezer’) to address issues arising from cybercrime and communications fraud and annoyances of various kinds.

The idea cannot possibly work on technical grounds. More worryingly, it displays a lack of awareness of the realities of cybersecurity that is astounding.

The first peculiarity is that the Centre is supposed to address four problems: email spam, unsolicited phone calls, fake communications a la Facebook, and malware. Although these have a certain superficial similarity — they all annoy individuals — they do not raise the same kinds of technical issues underneath, and no one person could be an expert in detecting, let alone prosecuting, all of them. It’s a bit like trying to amalgamate the Salvation Army and the police force because they both wear uniforms and help people!

The Centre will rely on reports from individuals: get a spam email and forward it to the Centre, for example. One of the troubles with this idea is that individuals don’t usually have enough information to report such things in a useful way, and they don’t make good starting points for an eventual prosecution. Canada already has a way to report unsolicited phone calls, but it only works for callers who at least partially keep the law by announcing who they are at the beginning of the call. The annoying (and illegal) robocalls can’t be reported because the person who gets them doesn’t know where they are coming from or who’s making them. And where there are prosecutions, each person who reports such a call has to sign an affidavit that the purported call did actually happen, to provide the legal basis for the incident.

The second, huge, problem with this idea is that, if individuals can report bad incidents, then spammers can also report fake bad incidents! And they can do it in such volume that investigators will have no way to distinguish the real from the fake. Creating fake spam emails, and evading mechanisms such as captchas that are meant to prevent wholesale reporting, is very easy.

There is also the deeper problem that besets all cybersecurity — attribution. It is always hard to trace cyberexploits back to their origins, and these origins are overwhelmingly likely to be computers taken over by botnets anyway. Working back along such chains to find someone to prosecute is tedious and expert work that depends on starting from as much information as possible.

The right way to address this problem is to set up honeytraps — machines and phones that seem to be ordinary but are instrumented so that, when an exploit happens, as much information as possible is collected at the time. Now there is a foundation for deciding which incidents are worth pursuing and starting out in pursuit with the best possible information. And, who knows, the knowledge that such systems are out there might dampen some of the enthusiasm on the part of the bad guys.

U.S. versus Canadian political spin

I’ve gone back and done a little work analyzing the party leaders’ speeches in the Canadian federal election in 2006, comparing the language patterns to those of the current U.S. presidential contenders.

In the overall ranking of spin, 38 of the 50 Canadian speeches I’ve collected rank higher on spin than any of the U.S. speeches.

This probably doesn’t mean that Canadian politicians spin more than U.S. politicians, at least not directly. In general, the Canadian speeches have much shorter sentences and are shorter overall. This limits the opportunity to use exclusive words at all, which in turn means that they can’t play much of a role in marking deception.

It may be that the Canadian electoral system, in which election campaigns are measured in weeks, rather than the gruelling years of American races, makes it easier for politicians to maintain a facade. It may also be that the Canadian media put less pressure on politicians.

It goes to show the importance of measuring spin (and deception) within a set of contextually appropriate texts. Failing to work out what the norms are in a particular domain can make a particular speech look particularly sincere or deceptive even when it is not.