Detecting abusive language online

My student, Hannah Leblanc, has just defended her thesis looking at predicting abusive language. The document is

https://qspace.library.queensu.ca/handle/1974/26252

Rather than treat this as an empirical problem — gather all the signal you can, select attributes using training data, and then build a predictor using those attributes — she started with models of what might drive abusive language. In particular, abuse may be associated with subjectivity (objective language is less likely to be abusive, even if it contains individual words that might look abusive) and with otherness (abuse often results from one group targeting another). She also looked at emotion and mood signals and their association with abuse.

All of the models perform almost perfectly at detecting non-abuse; they struggle more with detecting abuse. Some of this comes from mislabelling — documents that are marked as abusive but really aren’t; but much of the rest comes from missing signal — abusive words disguised so that they don’t match the words of a lexicon.

Overall the model achieves accuracy of 95% and F-score of 0.91.
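
(For calibration: the F-score here is presumably the standard F1, the harmonic mean of precision and recall,

    F1 = 2 * precision * recall / (precision + recall)

so 0.91 means the model is simultaneously precise and sensitive on the abusive class.)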

Software quality in another valley

In the mid-90s, there was a notable drop in the quality of software, more or less across the board. The thinking at the time was that this was a result of software development businesses (*cough* Microsoft) deciding to hire physicists and mathematicians because they were smarter (maybe) than computer scientists and, after all, building software was a straightforward process as long as you were smart enough. This didn’t work out so well!

But I think we’re well into a second version of this pattern, driven by a similar misconception — that coding is the same thing as software design and engineering — and a woeful ignorance of user interface design principles.

Here are some recent examples that have crossed my path:

  • A constant redesign of web interfaces that doesn’t change the functionality, but moves all of the relevant parts to somewhere else on the screen, forcing users to relearn how to navigate.
  • A complete ignorance of Fitts’s Law (the bigger the target, the faster you can hit it with mouse or finger; see the formulation just after this list), especially, and perhaps deliberately, for the targets that dismiss a popup. Example: CNN’s ‘breaking news’ banner across the top.
  • My Android DuckDuckGo app has a 50:50 chance, when you hit the back button, of going back a page or dropping you out of the app.
  • The Fast Company web page pops up two banners on every page (one inviting you to sign up for their email newsletter and one saying that they use cookies) which together consume more than half of the real estate. (The cookies popup fills the screen in many environments; thanks, GDPR!)
  • If you ask Alexa for a news bulletin, it starts at the moment you stopped listening last time — except that that was typically yesterday, so it essentially starts at a random point. (Yes, it tells you that you can listen to the latest episode, but the spell it requires to make this happen is so unclear I haven’t worked it out.)
  • And then there are all the little mysteries: Firefox add-ins that seem to lose some of their functionality, or Amazon’s Kindle ‘book deal of the day’ page that doesn’t list the deals of the day.
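
For reference, the usual (Shannon) formulation of Fitts’s Law predicts the movement time MT to acquire a target of width W at distance D as

    MT = a + b log2(1 + D/W)

where a and b are empirically fitted constants. Making the close button tiny (small W) and putting it far from where the pointer usually rests (large D) measurably slows the user down.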

There are several possible explanations for this wave of loss of quality. The first is the one I suggested above: that there’s an increase in unskilled software builders who are just not able to build robust products, especially apps. About a third of our undergraduates seem to have side hustles where they’re building apps, and the quality of the academic work they submit doesn’t suggest that these apps represent value for money, even if free.

Second, it may be that the environments in which new software is deployed have reached a limit where robustness is no longer plausible. This could be true at the OS level (e.g. Windows), or phone systems (e.g. Android) or web browsers. In all of these environments the design goal has been to make them infinitely extensible but also (mostly) backwards compatible — and maybe this isn’t really possible. Certainly, it’s easy to get the impression that the developers never tried their tools — “How could they not notice that?” is a standard refrain.

Third, it may be that there’s a mindset among the developers of free-to-the-user software (where the payment comes via monetising user behaviour) that free software doesn’t have to be good software — because the punters will continue to use it, and how can they complain?

Whichever of these explanations (or some other one) is true, it looks like we’re in for a period in which our computational lives are going to get more irritating and expensive.

‘AI’ performance not what it seems

As I’ve written about before, ‘AI’ tends to be misused to refer to almost any kind of data analytics or derived tool — but let’s, for the time being, go along with this definition.

When you look at the performance of these tools and systems, it’s often quite poor, but I claim we’re getting fooled by our own cognitive biases into thinking that it’s much better than it is.

Here are some examples:

  • Netflix’s recommendations for any individual user seem to overlap 90% with the ‘What’s trending’ and ‘What’s new’ categories. In other words, Netflix is recommending to you more or less what it’s recommending to everyone else. Other recommendation systems don’t do much better (see my earlier post on ‘The Sound of Music Problem’ for part of the explanation).
  • Google search results are quite good at returning, in the first few links, something relevant to the search query, but we don’t ever get to see what was missed and might have been much more relevant.
  • Google News produces what, at first glance, appear to be quite reasonable summaries of recent relevant news, but when you use it for a while you start to see how shallow its selection algorithm is — putting stale stories front and centre, and occasionally producing real howlers, weird stories from some tiny venue treated as if they were breaking and critical news.
  • Self-driving cars that perform well, but fail completely when they see certain patches on the road surface. Similarly, facial recognition systems that fail when the human is wearing a t-shirt with a particular patch.

The commonality between these examples, and many others, is that the assessment from use is, necessarily, one-sided — we get to see only the successes and not the failures. In other words (HT Donald Rumsfeld), we don’t see the unknown unknowns. As a result, we don’t really know how well these ‘AI’ systems do, and whether it’s actually safe to deploy them.

Some systems are ‘best efforts’ (Google News) and that’s fair enough.

But many of these systems are beginning to be used in consequential ways and, for that, real testing and real public test results are needed. And not just true positives, but false positives and false negatives as well. There are two main flashpoints where this matters: (1) systems that are starting to do away with the human in the loop (self-driving cars, 737 MAXs); and (2) systems where humans are likely to say or think ‘The computer (or worse, the AI) can’t be wrong’; and these are starting to include policing and security tools. Consider, for example, China’s social credit system. The fact that it gives low scores to some identified ‘trouble makers’ does not imply that everyone who gets a low score is a trouble maker — but this false implication lies behind this and almost all discussion of ‘AI’ systems.
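
To see how strong the false implication is, here’s a toy base-rate calculation (all numbers invented) of what a low score actually implies when trouble makers are rare:

    # Toy numbers, purely illustrative: even a fairly accurate scoring
    # system mislabels far more innocent people than it catches.
    population = 1_000_000
    trouble_makers = 1_000            # 0.1% of the population (assumed)
    recall = 0.90                     # trouble makers correctly given low scores
    false_positive_rate = 0.05        # innocents wrongly given low scores

    true_low = recall * trouble_makers
    false_low = false_positive_rate * (population - trouble_makers)
    precision = true_low / (true_low + false_low)
    print(f"P(trouble maker | low score) = {precision:.3f}")   # about 0.018

With these (assumed) rates, fewer than 2% of the people flagged with low scores would actually be trouble makers.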

Huawei’s new problem

The Huawei Cyber Security Evaluation Centre (HCSEC) is a joint effort, between GCHQ and Huawei, to increase confidence in Huawei products for use in the UK. It’s been up and running since 2013.

Its 2018 report focused on issues of reproducible builds. Binaries compiled in China were not the same size as binaries built in the UK. To a computer scientist, this is a bad sign, since it suggests that the code contains conditional compilation statements such as:

    #if COUNTRY_CODE == UK
        insert_backdoor();
    #endif

In the intervening year, they have dug into this issue, and the answer they have come up with is unexpected. It turns out that the problem is not a symptom of malice, but a symptom of incompetence. The code is simply not well enough engineered to produce consistent results.

Others have discussed the technical issues in detail:

https://www.theregister.co.uk/2019/03/28/hcsec_huawei_oversight_board_savaging_annual_report/

but here are some quotes from the 2019 report:

“there remains no end-to-end integrity of the products as delivered by Huawei and limited confidence on Huawei’s ability to understand the content of any given build and its ability to perform true root cause analysis of identified issues. This raises significant concerns about vulnerability management in the long-term”

“Huawei’s software component management is defective, leading to higher vulnerability rates and significant risk of unsupportable software”

“No material progress has been made on the issues raised in the previous 2018 report”

“The Oversight Board continues to be able to provide only limited assurance that the long-term security risks can be managed in the Huawei equipment currently deployed in the UK”

Not only is the code quality poor, but they see signs of attempts to cover up the shortcuts and practices that led to the issue in the first place.

The report is also scathing about Huawei’s efforts/promises to clean up its act; and they estimate a best-case timeline of 5 years to get to well-implemented code.

5G (whatever you take that to mean) will be at least ten times more complex than current networking systems. I think any reasonable computer scientist would conclude that Huawei will simply be unable to build such systems.

Canada, and some other countries, are still debating whether or not to ban Huawei equipment. This report suggests that such decisions can be depoliticised, and made purely on economic grounds.

But, from a security point of view, there’s still an issue — the apparently poor quality of Huawei software creates a huge threat surface that can be exploited by the governments of China (with or without Huawei involvement), Russia, Iran, and North Korea, as well as non-state actors and cyber criminals.

(Several people have pointed out that other network multinationals have not been scrutinised at the same depth and, for all we know, they may be just as bad. This seems to me implausible. One of the unsung advantages that Western businesses have is the existence of NASA, which has been pioneering reliable software for 50 years. If you’re sending a computer on a one-way trip to a place where no maintenance is possible, you pay a LOT of attention to getting the software right. The ideas and technology developed by NASA have had an influence on software engineering programs in the West that has tended to raise the quality of all of the software developed there. There have been unfortunate lapses, whenever the idea that software engineering is JUST coding becomes popular (Windows 95, Android apps), but overall the record is reasonably good. Lots better than the glimpse we get of Huawei, anyway.)

Annular similarity

When similarity is used for clustering, the most similar objects obviously need to be placed in the same cluster.

But when similarity is being used for human consumption, a different dynamic is in play — humans usually already know what the most similar objects are, and are interested in those that are (just) beyond those.

This can be seen most clearly in recommender systems. Purchase an item or watch a Netflix show, and your recommendation list will fill up with new objects that are very similar to the thing you just bought/watched.

From a strictly algorithm point of view, this is a success — the algorithm found objects similar to the starting object. But from a human point of view this is a total fail because it’s very likely that you, the human, already know about all of these recommended objects. If you bought something, you probably compared the thing you bought with many or all of the objects that are now being recommended to you. If you watched something, the recommendations are still likely to be things you already knew about.

The misconception about what similarity needs to mean to be useful to humans is at the heart of the failure of recommender systems, and even the ad serving systems that many of the online businesses make their money from. Everyone has had the experience of buying something, only to have their ad feed (should they still see it) fill up with ads for similar products (“I see you just bought a new car — here are some other new cars you might like”).

What’s needed is annular similarity — a region that is centred at the initial object, but excludes new objects that are too similar, and focuses instead on objects that are a bit similar.

Amazon tries to do this via “People who bought this also bought” which can show useful add-on products. (They also use “People who viewed this also viewed” but this is much less effective because motivations are so variable.) But this mechanism also fails because buying things together doesn’t necessarily mean that they belong together — it’s common to see recommendations based on the fact that two objects were on special on the same day, and so more likely to be bought together because of the opportunity, rather than any commonality.

Annular similarity is also important in applications that help humans to learn new things: web search, online courses, intelligence analysis. That’s why we built the ATHENS divergent web search engine (refs below) — give it some search terms and it returns (clusters of) web pages that contain information that is just over the horizon from the search terms. We found that this required two annuli: we first constructed the information implicit in the search terms; then an annulus around that core, containing the information we assumed would be known to someone who already knew the core; and only then a second annulus containing the results to be returned.

We don’t know many algorithmic ways to find annular similarity. In any distance-based clustering it’s possible, of course, to define an annulus around any point. But it’s tricky to decide on what the inner and outer radii should be, the calculations have to happen in high-dimensional space where the points are very sparse, and it’s not usually clear whether the space is isotropic.

Annular similarity doesn’t work (at least straightforwardly) in density-based clustering (e.g. DBSCAN) or distribution-based clustering (e.g. EM) because the semantics of ‘cluster’ doesn’t allow for an annulus.

One way that does work (and was used extensively in the ATHENS system) is based on singular value decomposition (SVD). An SVD projects a high-dimensional space into a low-dimensional one in such a way as to preserve as much of the variation as possible. One of its useful side-effects is that a point that is similar to many other points tends to be projected close to the origin; and a point that is dissimilar to most other points also tends to be projected close to the origin, because the dimension(s) it inhabits have little variation and tend to be projected away. In the resulting low-dimensional projection, points far from the origin tend to be interestingly dissimilar to those at the centre of the structure — and so an annulus imposed on the embedding tends to find an interesting set of objects.
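
As a concrete sketch (toy data, and note that the inner and outer radii are exactly the hard-to-choose parameters mentioned above), the pipeline looks something like this:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.random((500, 50))            # 500 objects, 50 attributes (toy data)
    X = X - X.mean(axis=0)               # centre before the SVD

    # project into k dimensions, preserving as much variation as possible
    k = 3
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    P = U[:, :k] * s[:k]                 # low-dimensional coordinates

    target = P[0]                        # the starting object
    d = np.linalg.norm(P - target, axis=1)

    # the annulus: not too close (already known), not too far (irrelevant)
    r_inner, r_outer = np.quantile(d, 0.2), np.quantile(d, 0.6)   # assumed radii
    annulus = np.flatnonzero((d > r_inner) & (d <= r_outer))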

Unfortunately this doesn’t solve the recommender system problem because recommenders need to find similar points that have more non-zeroes than the initial target point — and the projection doesn’t preserve this ordering well. That means that the entire region around the target point has to be searched, which becomes expensive.

There’s an opportunity here to come up with better algorithms to find annular structures. Success would lead to advances in several diverse areas.

(A related problem is the Sound of Music problem, the tendency for a common/popular object to muddle the similarity structure of all of the other objects because of its weak similarity to all of them. The Sound of Music plays this role in movie recommendation systems, but think of wrapping paper as a similar object in the context of Amazon. I’ve written about this in a previous post.)

 

Tracy A. Jenkin, Yolande E. Chan, David B. Skillicorn, Keith W. Rogers: Individual Exploration, Sensemaking, and Innovation: A Design for the Discovery of Novel Information. Decision Sciences 44(6): 1021-1057 (2013).
Tracy A. Jenkin, David B. Skillicorn, Yolande E. Chan: Novel Idea Generation, Collaborative Filtering, and Group Innovation Processes. ICIS 2011.
David B. Skillicorn, Nikhil Vats: Novel information discovery for intelligence and counterterrorism. Decision Support Systems 43(4): 1375-1382 (2007).
Nikhil Vats, David B. Skillicorn: Information discovery within organizations using the Athens system. CASCON 2004: 282-292.

 

China-Huawei-Canada fail

Huawei has been trying to convince the world that they are a private company with no covert relationships to the Chinese government that might compromise the security of their products and installations.

This attempt has been torpedoed by the Chinese ambassador to Canada who today threatened ‘retaliation’ if Canada joins three of the Five Eyes countries (and a number of others) in banning Huawei from provisioning 5G networks. (The U.K. hasn’t banned Huawei equipment, but BT is uninstalling it, and the unit set up jointly by Huawei and GCHQ to try to alleviate concerns about Huawei’s hardware and software has recently reported that it’s less certain about the security of these systems now than it was when the process started.)

It’s one thing for a government to act as a booster for national industries — it’s another to deploy government force directly.

China seems to have a tin ear for the way that the rest of the world does business; it can’t help but hurt them eventually.

The cybercrime landscape in Canada

Statscan recently released the results of their survey of cybercrime and cybersecurity in 2017 (https://www150.statcan.gc.ca/n1/pub/71-607-x/71-607-x2018007-eng.htm).

Here are some of the highlights:

  • About 20% of Canadian businesses had a cybersecurity incident (that they noticed). Of these around 40% had no detectable motive, another 40% were aimed at financial gain, and around 23% were aimed at getting information.
  • More than half had enough of an impact to prevent the business operating for at least a day.
  • Rates of incidents were much higher in the banking sector and pipeline transportation sector (worrying), and in universities (not unexpected, given their need to operate openly).
  • About a quarter of businesses don’t use an anti-malware tool, about a quarter do not have email security (not clear what this means, but presumably antivirus scanning of incoming email, and maybe exfiltration protection), and almost a third do not have network security. These are terrifying numbers.

Relatively few businesses have a policy for managing and reporting cybersecurity incidents; vanishingly few have senior executive involvement in cybersecurity.

It could be worse, but this must be disappointing to those in the Canadian government who’ve been developing and pushing out cyber awareness.

People in glass houses

There’s a throwaway line in Woodward’s book about the Trump White House (“Fear”, Simon and Schuster, 2018) where he says that the senior military were unwilling to carry out offensive cyber operations because they didn’t think the US would fare well under retaliation.

Then this week the GAO came out with a report on cybersecurity in DOD weapons systems (as opposed to DOD networks). It does not make happy reading. (Full report).

Here’s what seems to me to be the key quotation:

“We found that from 2012 to 2017, DOD testers routinely found mission critical cyber vulnerabilities in nearly all weapon systems that were under development. Using relatively simple tools and techniques, testers were able to take control of these systems and largely operate undetected”

Almost every word could be italicized and many added exclamation marks would hardly suffice.

To be fair, some of these systems are still under development. But the report makes clear that, for many of them, cybersecurity was not really considered in their design. The typical assumption was that weapons systems are standalone. But in a world where software runs everything, there has to be a mechanism for software updates at least, and so a connection to the outside world. As the Iranians discovered, even updating from a USB stick is not attack-proof. And security is a difficult property to retrofit, so these systems will never be as cyberattack resistant as we might all have wished.

Predicting fraud risk from customer online properties

This interesting paper investigates how well a digital footprint (properties associated with a customer’s online interactions with a business, such as platform and time of day) can be used to predict the risk of non-payment for a pay-on-delivery shopping business.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3163781

The properties that are predictive are not, by themselves, all that surprising: those who shop in the middle of the night are higher risk, those who come from price comparison web sites are lower risk, and so on.

What is surprising is that, overall, predictive performance rivals, and perhaps exceeds, risk prediction from FICO (i.e. credit scores) — but these properties are much easier to collect, and the model based on them can be applied to those who don’t have a credit score. What’s more, the digital footprint and FICO-based models are not very correlated, and so using both does even better.
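
The ‘two weakly correlated signals combine well’ point is easy to illustrate on synthetic data (everything below is invented for illustration, not the paper’s method):

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)
    n = 5000
    default = (rng.random(n) < 0.1).astype(int)   # 10% non-payment rate (assumed)

    # two noisy, largely independent views of the same underlying risk
    footprint = default + rng.normal(0, 1.5, n)
    fico = default + rng.normal(0, 1.5, n)

    both = np.column_stack([footprint, fico])
    blend = LogisticRegression().fit(both, default).predict_proba(both)[:, 1]

    print(roc_auc_score(default, footprint))   # either signal alone: ~0.68
    print(roc_auc_score(default, blend))       # blended: noticeably higher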

The properties that make up the digital footprint are so easy to collect that almost any online business or government department could (and should) be using them to get a sense of their customers.

I’ve heard (but can’t find a reference) that Australian online insurance quotes vary in price based on what time of day they are requested — I suppose based on an intuition that procrastination is correlated with risky behaviour. I’d be grateful if anyone has details of any organisation that is using this kind of predictive model for its customers.

Businesses like Amazon require payment up front — but they also face a range of risks, including falsified payments (stolen credit cards) and delivery hijacking. Approaches like this might well help in detecting these other kinds of fraud.

Blockchain — hype vs reality

I was at a Fintech conference last week where (not surprisingly) there was a lot of talk about blockchains and their potential impact in the banking and insurance sectors. There seem to me to be two issues that don’t get enough attention (although wiser heads than mine have written about both):

  1. The current implementations are much too heavy-weight to survive. I’ve seen numbers suggesting that 0.2% of the entire world’s energy already goes to driving blockchains; the response times are too slow for many (most?) real applications; and the data sizes that can be preserved on a blockchain are too small. This seems like a situation where being the second or third mover is a huge advantage. For example, the Algorand approach seems to have substantial advantages over current blockchain implementations.
  2. Disintermediation replaces the requirement to trust an intermediary with a requirement to trust the software developers who implement the interfaces to the blockchain. There are known examples where deposits to a cryptocurrency have permanently disappeared as the result of software bugs in the deposit code (video of a talk by Brendan Cordy); and this will surely get worse as blockchains are used for more complex things. The fundamental issue is that using a blockchain requires atomic commit operations, and software engineering is not really up to the challenge (although this is an interesting area where formal methods might make a difference).

Cash and money laundering

Although privacy advocates often favour continuing a cash economy, it’s becoming clear that cash is disproportionately where bad things happen.

There are two ways in which cash is used in the black (and often criminal) economy. The first is as the basis of a barter economy — those involved buy and sell almost exclusively in cash, and their activities never get onto the radar of banks, taxation departments, or financial intelligence units. There’s not much that can be done about this, except that those who live in the cash economy usually have a lifestyle well beyond their official income, so tax authorities might take an interest.

Cash is much more usable if it can be converted into electronic currency in banks. The number of ways this can be done is steadily diminishing. In Canada, banks and other financial institutions are required to report cash deposits above $10,000 unless they can show that they come from routine business activities (and there are quite specific rules about what this means). There are some loopholes, but not many and not big.

Australia has just taken steps towards banning cash deposits above $10,000 altogether. The uproar has been revealing.

One operator of an armoured car business complained to the media that he was moving ~$5 million a month in cash, about half from car dealers, and that this would ruin his business. The treasurer’s response was, roughly speaking, “Good”. (Businesses that sell transport already have quite strong restrictions about their cash activities in many countries.)

http://www.news.com.au/finance/economy/federal-budget/im-being-kicked-in-the-teeth-security-company-owner-says-10000-cash-limit-will-demolish-him/news-story/05c71552b8f5c63c6846b13335763063

Also in Australia, the proportion of cash transactions dropped to 37% by 2016, with a corresponding drop in the total value they represent. However, the ratio of physical currency to GDP is at an all time high; there is $3000 in circulation for every Australian.

https://www.smh.com.au/business/the-economy/we-re-turning-from-cash-but-demand-for-notes-has-never-been-higher-20180510-p4zegm.html

There are either some people with vast sums stashed under their mattresses, or this money is being used mostly by criminals. (Hint: it’s the latter.)

It’s no wonder that many countries are trying to be more aggressive about reducing the cash in circulation, mostly by removing high-denomination notes (because more value can be packed into a tighter space).

But it also seems clear that banks can’t resist the lure of large cash deposits, no matter how much they should. And as long as banks don’t tell them, countries’ financial intelligence units can’t do a lot about it.

 

The Sound of Music Problem

Recommender and collaborative filtering systems all try to recommend new possibilities to each user, based on the global structure of what all users like and dislike. Underlying such systems is an assumption that we are all alike, just in different ways.

One of the thorny problems of recommender systems is the Sound of Music Problem — there are some objects that are liked by almost everybody, although not necessarily very strongly. Few people hate the film “The Sound of Music” (TSOM), but for most it’s more a tradition than a strong favourite. Objects like this — seen/bought by many, positively rated but not strongly — cause problems in recommender systems because they create similarities between many other objects that are stronger than ‘they ought to be’. If A likes an object X and TSOM, and B likes Y and TSOM, then TSOM makes A and B seem more similar than they would otherwise be; and also makes X and Y seem more similar as a consequence. When the common object is really common, this effect seriously distorts the quality of the recommendations. For example, Netflix’s list of recommendations for each user has substantial overlap with its ‘Trending Now’ list — in other words, Netflix is recommending, to almost everyone, the same set of shows that almost everyone is watching. Nor is this problem specific to Netflix: Amazon makes similar globally popular recommendations. Less obviously, Google’s page ranking algorithm produces a global ranking of all pages (more or less), from which the ones containing the given search terms are selected.
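
A three-item toy example (ratings invented) makes the distortion concrete: A and B share nothing except TSOM, yet a cosine-style similarity calls them alike:

    import numpy as np

    def cosine(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

    #             X  Y  TSOM
    a = np.array([1, 0, 1])     # A likes X and The Sound of Music
    b = np.array([0, 1, 1])     # B likes Y and The Sound of Music

    print(cosine(a, b))           # 0.5: apparently similar users
    print(cosine(a[:2], b[:2]))   # 0.0 once the common item is dropped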

The property of being a TSOM-like object is related to global popularity, but it isn’t quite the same. Such an object must have been seen/bought by many people, but it is its place in the network of objects that causes the trouble, not just its high frequency of positive ratings.

Solutions to this problem have been known for a while. For example, Matt Brand’s paper (http://www.merl.com/reports/docs/TR2005-050.pdf) outlines a scalable approach that works well in practice; this work deserves to be better known. It’s a bit of a mystery why such solutions haven’t been used by all of the businesses that do recommendation, and so why we live with recommendations that are so poor.

Provenance — the most important ignored concept

I’ve been thinking lately about the problems faced by the customs and border protection parts of government. The essential problem they face is: when an object (possibly a person) arrives at a border, they must decide whether or not to allow that object to pass, and whether to charge for the privilege.

The basis for this decision is never the honest face of the traveller or the nicely wrapped package. Rather, all customs organisations gather extra information to use. At one time, this was a document of some sort accompanying the object, and itself certified by someone trustworthy; increasingly it is data collected about the object before it arrived at the border, and also the social or physical network in which it is embedded. For humans, this might be data provided on a visa application, travel payment details, watch lists, and social network activity. For physical objects, this might be shipper and destination properties and history.

In other words, customs wants to know the provenance of every object, in as much detail as possible, and going back as far as possible. All of the existing analytic techniques in this space are approximating that provenance data and how it can be leveraged.

Other parts of government, for example law enforcement and intelligence, are also interested in provenance. For example, sleeper agents are those embedded in another country with a false provenance. The use of aliases is an attempt to break chains of provenance.

Ordinary citizens are wary of this kind of government data collection, feeling that society rightly depends on the existence of two mechanisms for deleting parts of our personal history: ignoring non-consequential parts (youthful hijinks, sealed criminal records) at least after they disappear far enough into the past; and forgiving other parts. There’s a widespread acceptance that people can change and it’s too restrictive to make this impossible.

The way we handle this today is that we explicitly delete certain information from the provenance, but make decisions as if the provenance were complete. This works all of the way from government level to the level of a marriage.

There is another way to handle it, though. That is to keep the provenance complete, but make decisions in ways that acknowledge explicitly that using some data is less appropriate. The trouble is that nobody trusts organisations to make decisions in this second way.

There’s an obvious link between blockchains and provenance, but the whole point of a blockchain is to make erasure difficult. So they will really only have a role to play in this story if we, collectively, adopt the second mechanism.

The issues become more difficult if we consider the way in which businesses use our personal information. Governments have a responsibility to implement social decisions and so are motivated to treat individual data in socially approved ways. Businesses have no such constraint — indeed they have a fiduciary responsibility to leverage data as much as they can. A business that knows data about me has no motivation to delete or ignore some of it.

This is the dark side of provenance. As I’ve discussed before, the ultimate goal of online businesses is to build models of everyone on the planet and how much they’re willing to pay for every product. In turn, sales businesses can use these models to apply differential pricing to their customers, and so increase their profits.

This sets up an adversarial situation, where consumers are trying to look like their acceptable price is much lower than it is. Having a complete, unalterable personal provenance makes this much, much more difficult. But by the time most consumers realise this, it will be too late.

In all of these settings, the data collected about us is becoming so all-encompassing and permanent that it is going to change the way society works. Provenance is an under-discussed topic, but it’s becoming a central one.

What is the “artificial intelligence” that is all over the media?

Anyone who’s been paying attention to the media will be aware that “artificial intelligence” is a hot topic. And it’s not just the media — China recently announced a well-funded quasi-industrial campus devoted to “artificial intelligence”.

Those of us who’ve been around for a while know that “artificial intelligence” becomes the Next Big Thing roughly every 20 years, and so are inclined to take every hype wave with a grain (or truckload) of salt. Building something algorithmic that could genuinely be considered intelligent is much, much, much harder than it looks, and I don’t think we’re even close to solving this problem. Not that we shouldn’t try, as I’ve argued earlier, but mostly for the side-effects the attempt will spin off.

So what is it that the media (etc.) think has happened that’s so revolutionary? I think there are two main things:

  1. There’s no question that deep learning has made progress at solving some problems that have been intractable for a while, notably object recognition and language manipulations. So it’s not surprising that there’s a sense that suddenly “artificial intelligence” has made progress. However, much of this progress has been over-hyped by the businesses doing it. For example, object recognition has this huge, poorly understood weakness that a pair of images that look to us identical can produce the correct identification of the objects they contain from one image, and complete garbage from the other, using the same algorithm. In other words, the continuity between images that somehow “ought” to be present when there are only minor pixel level changes is not detected properly by the algorithmic object recogniser. In the language space, word2vec doesn’t seem to do what it’s claimed to do, and the community has had trouble reproducing some of the early successes.
  2. Algorithms are inserting themselves into decision making settings which previously required humans in the loop. When machines supplemented human physical strength, there was an outcry about the takeover of the machines; when algorithms supplemented human mental abilities, there was an outcry about the takeover of the machines; and now that algorithms are controlling systems without humans, there’s a fresh outcry. Of course, this isn’t as new as it looks — autopilots have been flying planes better than human pilots for a while, and automated train systems are commonplace. But driverless vehicles have pushed these abilities into public view in a new, forceful way, and it’s unsettling. And militaries keep talking about automated warfare (although this seems implausible given current technology).

My sense is that these developments are all incremental, from a technical perspective if not from a social one. There are interesting new algorithms, many of them simply large-scale applications of heuristics, but nothing that qualifies as a revolution. I suspect that “artificial intelligence” is yet another bubble, as it was in the 60s and in the 80s.

New data on human trafficking

The UN’s International Organisation for Migration has just released a large tranche of data about migration patterns, and a slick visualization interface:

https://www.ctdatacollaborative.org/

Some of the results are quite counter-intuitive. Worth a look.

A gentle guide to money laundering

Money laundering is the process of legitimising money obtained from illegal activities (‘proceeds of crime’), moving it into the financial system in such a way that either it fails to attract the attention of authorities, or there is a plausible reason that can be used to explain its existence. The first mechanism is preferable because it also avoids liability for tax.

Much, although not all, illicit money is collected as cash, from activities such as drug sales, theft, and ransoms. It is possible to keep this as cash and spend it within a cash environment, but this is limiting to the owner, mostly because it is dangerous to spend it in large amounts. Law enforcement have long used the tactic of watching for people who spend more than they legitimately earn. National tax authorities also watch for spending that is greater than declared income. Until relatively recently, many countries imposed a wall between their tax departments and their law enforcement to encourage criminals to declare, and pay tax on, the proceeds of crime. Famously, Al Capone was prosecuted for tax crimes, not organised crime. Thus it is natural that criminals want to find ways to take cash and convert it into explained resources in the mainstream financial system. Once in the financial system, opportunities for moving it around, and so blurring the trail, are much greater.

National money laundering

We begin by considering money laundering in a national context, that is within a single country. It is still possible to buy some expensive assets for cash, although opportunities are diminishing. Some of these assets are attractive because they are easy to transfer without any record (e.g. jewelry) so the eventual holder of such an asset cannot be linked back to the individual who bought it. In some Western countries it is still possible to buy a property worth millions and walk into the real-estate broker’s office with the purchase price in cash. However, even in these countries this probably requires some level of collusion or wilful blindness on the part of the real-estate broker. Many jurisdictions have begun to push ‘know your customer’ regulations out to any profession that handles large amounts of money, requiring them to report large cash transactions. The opportunities for directly converting cash into valuable items are shrinking.

The other way to legitimise cash in a single-country setting is to leverage a suitable business. The best businesses are those where the gap between inputs (costs of raw materials) and outputs (prices charged) is large. For example, a pizza business may sell 1000 pizzas a month but buy the raw ingredients for 5000 pizzas a month, flushing the unneeded ingredients away. The cost of the discarded raw ingredients is much lower than the price of finished pizzas. The cash to be laundered is accounted for as cash purchases of the non-existent pizzas, and so becomes part of the apparently legitimate profits of the business. It is difficult for law enforcement to demonstrate that not enough real pizzas were sold, and so that the business must be partly fraudulent. This mechanism depends on the claim that many customers pay with cash, and this is becoming less and less sustainable as debit cards etc. are more widely used, even for small purchases.
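
The arithmetic, with invented prices, shows why the gap between input costs and output prices matters:

    real_sales      = 1000
    claimed_sales   = 5000
    price           = 20.0    # assumed retail price per pizza
    ingredient_cost = 4.0     # assumed ingredient cost per pizza

    fictitious = claimed_sales - real_sales
    laundered = fictitious * price                 # dirty cash booked as revenue
    cost_of_cover = fictitious * ingredient_cost   # ingredients bought and discarded
    print(laundered, cost_of_cover)   # $80,000 cleaned for $16,000, a 20% ‘fee’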

Another suitable business is art, because the costs of the raw materials to create, say, a painting are small compared to the selling price, which could be a million times greater. The art market worldwide is secretive, and anonymity of purchasers is commonplace. It is possible to create an art dealership and generate paintings that are apparently sold for large sums in cash to unidentified buyers. The sums paid by these fictitious buyers are made up from the laundered cash. This mechanism is more difficult to use than creating retail businesses, since it requires an art expert and an art creator who is good enough to create works with plausibly large prices.

Businesses where large amounts change hands can also be exploited, for example casinos, where wins that are provably legitimate can be bought at a discount price from winners desperate for quick money. However, casinos are amongst the most instrumented and analysed locations in the world, so any interaction within a casino risks leaving a record, visible both to the casino’s analytics and potentially to law enforcement.

Banks in several Western countries have mechanisms that allow anonymous cash deposits, although into known accounts. In Australia, ATMs allow deposits of large amounts of cash, often into accounts set up by foreign tourists for plausible purposes. In Canada, cash deposit mechanisms are a holdover from the days when many businesses needed to deposit cash takings after banking hours. Amounts of around $25,000 can be deposited in this way, and then moved through the domestic financial system and between banks, so that the trail is difficult, perhaps impossible, to follow.

Within a single national jurisdiction, there are fewer and fewer ways to convert cash to other kinds of mainstream assets without drawing attention. Moving money across borders, although it increases some kinds of risks, also has advantages because national government organisations are weak at cooperating with their counterparts in other countries. Also, many countries have little interest in policing money laundering, and so serve as havens for it.

International money laundering.

The main reasons for money laundering across national borders are that it provides a natural break in the chain of ownership, and discontinuities in jurisdiction. Although law enforcement organisations cooperate between countries, this cooperation is usually more difficult than within a single country, with different laws applying, different search and seizure rules, and different priorities. Tax departments also tend to be nationally focused.

It is also common for those who benefit from criminal activity to live in countries other than those where the crimes are committed. For example, the heads of drug production and smuggling cartels tend to live in Central and South America, while their profits are primarily made in the U.S. and Brazil. These profits must be moved from one country to another before the beneficiaries can access them.

There are three qualitatively distinct mechanisms for moving money across borders, each with their own issues: objects of value can be physically moved; money can be moved through the global financial system; and value can be transferred virtually, that is without any actual movement of anything.

Moving objects of value.

The most obvious way to do this is to move cash physically. Travellers are supposed to declare amounts above a certain threshold (the $10,000 limit), but it is far from clear how likely someone carrying large amounts of currency is to be detected, either as an outgoing passenger (where checks are quite weak) or at Customs as an incoming passenger (which might be riskier, except that the countries where transnational criminals reside also tend to be those with weak border controls). For example, the U.S. estimates that at least $40 billion in physical currency is smuggled across the U.S.-Mexico border each year, while programs aimed at finding and stopping it have interdicted less than $100 million. Currency can also be shipped as cargo and, again, the level of risk this carries is far from clear. There have been a non-trivial number of interdictions of currency in air shipments, and in vehicles across land borders. The reluctance of retailers to accept any but pristine U.S. dollar bills in South America suggests that owning bills that are visibly used attracts suspicion in those countries. This in turn suggests that authorities in those countries are aware of illicit US currency shipments that end up in circulation in South American countries. One of the problems with currency is that it is quite bulky for its value, and sometimes has a detectable smell (e.g. U.S. dollars).

The practice of smurfing, carrying currency just below the declarable amount, is common and existing legal regimes make this difficult to suppress.

Another way to move value is to convert cash into small, portable, valuable items and transport these instead. The difficulty here is that the purchase of such items for cash is increasingly raising suspicion, as described above (although a number of small purchases over time could build up a reservoir of valuable items – for example, stored value cards can be purchased in modest quantities at a time, while accumulating large totals). The exception is art, for which large cash purchases are still the norm. Art has the added advantage that its declared value for border crossing need have little to do with its realisable value, and so it may not draw much attention from Customs. Thus while smuggling high-value diamonds carries some risk, smuggling art is almost totally without risk.

Bearer negotiable instruments also provide a way to move value (still with the risks associated with obtaining them). These are almost impossible to detect at borders, since they can be embodied in a single piece of paper. In some jurisdictions (e.g. Australia) there is no requirement to report a bearer negotiable instrument sent by mail.

Another way to move value is invoice fraud. A shipper in country A sends a legitimate product to country B, but charges an exorbitant price for it. When the recipient pays the exorbitant price, a confederate in country A receives the difference between the actual price and the exorbitant price from the shipper so that value is transferred from country B to country A.

Moving money through the global financial system.

The global financial system exists to move money between countries, but such movements must be traceable by the financial institutions involved, and the records of these movements are increasingly accessible to governments. Thus a criminal moving money internationally must take steps to make the records of the movements seem innocuous. Banks are supposed to report transfers that they consider suspicious. The extent to which banks take this seriously, and their working definitions of ‘suspicious’ are far from clear, but many are looking for patterns of transfer that are structured, that is broken up into several sub-transfers, each designed to look innocuous. Another strategy used by criminals to further blur the existence of a large transfer is to send sub-transfers from different bank branches, using variant names and details of the sender, and using other techniques to disguise the similarity of the sub-transfers. Any one of the individual sub-transfers is designed not to seem suspicious at the particular branch; but it is not difficult for the bank to detect the overall structure, if it looks for it.  There are limits to how much this can be done because the receiver needs to know that the correct total amount has been transferred, and in a timely way; too much blurring of sub-transfers creates opportunities for the sender to purloin some of the money. Banks are increasingly validating precise details of senders in their online transfer interfaces, making it harder to create artificial variations from one sub-transfer to another.
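
A bank-side check for this kind of structuring can be sketched in a few lines (the schema, window, and threshold are all assumed):

    from collections import defaultdict

    THRESHOLD = 10_000
    WINDOW_DAYS = 7

    # (day, sender as recorded, beneficiary account, amount): toy records
    transfers = [
        (1, "J. Smith",    "ACC-99", 4_800),
        (2, "John Smith",  "ACC-99", 4_700),    # variant sender details
        (3, "J. A. Smith", "ACC-99", 4_900),
        (5, "A. Jones",    "ACC-12", 2_000),
    ]

    by_beneficiary = defaultdict(list)
    for day, sender, beneficiary, amount in transfers:
        by_beneficiary[beneficiary].append((day, amount))

    # flag beneficiaries whose sub-transfers, taken together, cross the
    # reporting threshold within a short window
    for beneficiary, items in by_beneficiary.items():
        total = sum(amount for _, amount in items)
        days = [day for day, _ in items]
        if total > THRESHOLD and max(days) - min(days) <= WINDOW_DAYS:
            print(f"flag {beneficiary}: {len(items)} sub-transfers totalling {total}")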

Moving value virtually.

There are a variety of alternative remittance systems (ARSs), of which the best known are the hawalas, that developed to facilitate transfers for people who do not have easy access to banks, or for whom the fees charged by mainstream financial institutions are prohibitive. Historically, the customers of these ARSs were guest workers in rich countries who sent a portion of their wages to relatives back home.

When the flow from country A to country B is more or less in balance with the flow in the reverse direction, ARSs need move no money at all. Customers in country B receiving (notionally) money sent from country A are in fact paid with money deposited in country B intended for country A, and vice versa. Short-term imbalances are handled on a trust basis by the ARS bankers at both ends.

A problem arises when the flows between the two countries are not balanced, so that there is a net flow in one direction. The most common net flow is from rich countries to poorer ones and this is also the more probable direction for money laundering flows as well (although people smuggling, for example, may generate flows in the other direction). Net flows must eventually be realised as actual flows.

One way to handle the actual flow is to use conventional financial system transfers. A single financial system transfer is the result of the net of many small transfers collected and aggregated, so the overhead of a standard banking transfer is amortized. The effect of money laundering flows can be concealed in the larger flows arising from ordinary practice. Concealment is sometimes helped by the informal record keeping of many ARSs, although they are now registered in some jurisdictions, and so tracking mechanisms are beginning to be in place.

The second way to handle the required balancing flows is to use cuckoo smurfing. This technique uses a legitimate transfer, in the direction opposite to that required to correct the imbalance, as cover. In other words, if ARS A wants to move $50,000 to ARS B to correct an imbalance, they find a legitimate transfer from a customer of B to a customer of A of the right size; B takes the deposited funds from B’s customer and keeps them, while A pays $50,000 to A’s customer. Both customers are satisfied, A is $50,000 poorer while B is $50,000 richer, as required, and no money has actually moved through the global financial system. In fact, the only way to detect that this has happened is that the apparent transfer from B’s customer to A’s customer did not leave a trace, when it should have.

The same mechanism can be used directly for money laundering, as long as a matching countervailing innocent flow is available. The lack of trace of the money laundering transfer makes this approach especially attractive.

Cuckoo smurfing requires the existence of substantial innocent transfers in the opposite direction to the net flows between the two countries concerned (typically from poorer countries to richer ones) which may limit the applicability of this technique.

Many developed countries are on a trajectory to block most of these money laundering paths by more detailed regulation, and increased enforcement. However, Financial Intelligence Units (FIUs) – AUSTRAC in Australia, FINTRAC in Canada, FinCEN in the U.S., and the Financial Intelligence Unit in the U.K., for example – struggle to find and/or block money laundering for several structural reasons.

First, the cornerstone of detection is the reporting, by banks and ARSs, of transactions that they consider to be suspicious or anomalous. While legislation lays out the conditions that should trigger reporting, there is, inevitably, some interpretation leeway; dealing with rich customers is how banks make their money; and so there is some incentive to under-report certain kinds of transactions. In other words, banks make a policy decision about the level of risk they will take, and this decision is not necessarily one that FIUs would agree with. In Zapata et al. v HSBC Holding Plc (2016), HSBC admitted criminal liability for laundering $881 million of drug cartel proceeds, accepting large sums of money from individuals with no visible source of income. In several other successful terrorist financing prosecutions, well-known multinational banks transferred millions of dollars to terrorist organisations. The Commonwealth Bank of Australia has recently been charged with failing to report more than 50,000 suspicious transactions over the $10,000 threshold, involving large deposits (hundreds of millions) paid in through ATMs and almost immediately transferred offshore. This suggests that, as well as developing regulations, FIUs need to be more aggressive in their approach to banks. Another aspect of the problem is that FIUs don’t know about the transactions that they aren’t told about, so it is difficult for them to assess the extent of under-reporting. Measuring compliance remains a challenge.

Second, money laundering has become a service, provided by specialists and marketed to criminals. A criminal group can outsource its money laundering needs to an organisation that develops its own intelligence, mechanisms, and experience in the use of techniques for laundering. More importantly, though, the use of such a wholesaler breaks the connection between the crime(s) that generated the money and the money itself – so that proving that intercepted money is actually proceeds of crime becomes much more difficult. Thus when FIUs find transfers that are clearly suspicious, they may be unable to prosecute anyone because they cannot demonstrate that the money is not innocent. There may be a greater role for disruption as a strategy rather than prosecution, as the U.K. already does.

One development which will change the face of money laundering is the development of cryptocurrencies such as Bitcoin. These make it possible to decouple the relationship between criminal and crime. For example, drugs are increasingly sold online via Dark Web market sites. Drugs are paid for with Bitcoin and shipped to the buyer through regular postal channels. These market sites operate like other online businesses, in some cases guaranteeing refunds if shipments are interdicted by Customs. But now criminals are paid in untraceable Bitcoins, and it is the buyers who have to convert traceable national currency into untraceable (and international) cryptocurrency. Bitcoin is already accepted by some real-estate brokers, some airlines and travel agencies, and sites such as eBay, so that criminals can spend their Bitcoin on their lifestyle directly. There are also arbitrage businesses that use Bitcoin to buy gift cards which are widely usable. The development of ransomware, for which the ransoms are also collected in cryptocurrency, shows how non-physical criminal ‘services’ can also be delivered without providing a path back to the criminals involved.

Another form of illicit money transfer that is not, strictly speaking, money laundering is the movement of money for evasion of tax. Many forms of tax evasion/avoidance seek to move money without revealing its ownership. Even when ownership is visible, international transfers can be used to reduce the tax payable. An individual in country A sends some money to country B using a mechanism that makes it non-taxable in country A. This money is then moved to country C and is then brought back to country A as incoming money that is again not subject to tax (for example, an international business loan). The money is now available to its original owner, but has been sheltered from tax liability. This is difficult for financial authorities in any of the three countries to detect, since it seems innocuous until the loop is visibly closed.
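
If the records from all three jurisdictions could ever be joined (a big if), detection reduces to cycle-finding on the transfer graph; a toy sketch:

    # Toy transfer records; each hop looks innocuous on its own.
    transfers = [
        ("owner-in-A", "entity-in-B"),    # non-taxable outbound transfer
        ("entity-in-B", "entity-in-C"),
        ("entity-in-C", "owner-in-A"),    # comes back as a 'business loan'
    ]

    graph = {}
    for src, dst in transfers:
        graph.setdefault(src, []).append(dst)

    def returns_to(start, graph):
        # depth-first walk; reaching the start again closes the loop
        stack, seen = [start], set()
        while stack:
            node = stack.pop()
            for nxt in graph.get(node, []):
                if nxt == start:
                    return True
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return False

    print(returns_to("owner-in-A", graph))   # True, but only with all records joined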

(Fear not, Dear Reader, that this document reveals anything to criminals that they don’t already know. All of the content is already available from public sources.)

Is “sentiment analysis” doing anything real?

Oceans of computational cycles have been spent analysing the sentiment of documents, driven by businesses interested in how their products are being perceived, movie producers interested in reactions to their potential products, and just about everyone when it comes to tweets.

Sentiment is based on a measure of how “positive” or “negative” a particular document is. The problem is that there are several aspects of an individual’s state that could be positive or negative, and sentiment analysis jams them all into one bucket and measures the mixture. It’s far from clear that this measures anything real — signs of which can be seen in the well-known one-and-a-half star difference when individuals are asked to rate the same objects on two successive days.

So what can be positive and negative?

It could be the individual’s attitude to a particular object and, of course, this is what most systems purport to be measuring. However, attitude is a two-place relation: A’s attitude to B. It’s usually obvious that a document has been written by A, but much more difficult to make sure that the object about which the attitude is being expressed is actually B.

However, most of the difficulty comes from other aspects that can also be positive or negative. One of these is mood. Mood is an internal setting whose drivers are poorly understood but which is known to be (a) predictable over the course of, say, a day, and (b) composed of two independent components, positive mood and negative mood (that is, they are not opposites). In broad brush terms, negative mood is stable through the day, while positive mood peaks in the middle of the day. There are longer-term patterns as well: positive mood tends to increase through the week while negative mood decreases.
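
A toy sketch of this two-component picture, with invented functional forms (only the qualitative shapes, flat negative mood and a midday peak in positive mood, come from the description above):

    # Toy model of the two mood components across a day.
    # The curves are invented; only their shapes are meaningful.
    import math

    def positive_mood(hour):
        # invented bump peaking in the early afternoon
        return 0.5 + 0.4 * math.exp(-((hour - 13) ** 2) / 18.0)

    def negative_mood(hour):
        # roughly constant through the day, with a small invented wobble
        return 0.3 + 0.02 * math.sin(2 * math.pi * hour / 24.0)

    for hour in (6, 9, 12, 15, 18, 21):
        print(hour, round(positive_mood(hour), 2), round(negative_mood(hour), 2))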

Looking at someone’s writing about an object should therefore take their underlying mood into account, but it never does. It would also be difficult to tease apart the signals of mood from the signals of attitude with the current state of the art. But we could plausibly predict that “sentiment” would be less positive overall if it were captured at the beginning or end of the day.

The other aspect that can be positive or negative is emotion. Emotions are short-term responses to the current environment that reorder an individual’s priorities to optimize decision making, especially in response to an external stimulus. Two emotions align strongly with positivity (joy) and negativity (disgust).

Looking at someone’s writing about an object should therefore also take into account their emotional state at the time of writing, but it never does. Again, it would be difficult to tease the signals of emotion and the signals of attitude apart. I have no doubt that many businesses get much worse results from their surveys than they ‘should’, because the surveys are designed so poorly that they become annoying, and the annoyance spills over into the content of the responses.

Bottom line: there is no such thing as positive or negative sentiment. There are positive and negative attitudes, moods, and emotions, but the one that sentiment analysis is trying to measure, attitude, is inextricably confounded with the other two. Progress is being made in understanding and detecting moods and emotions, but much less has been done on detecting attitudes, mostly because of the difficulty of finding the intended object within a short piece of text.
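
The confounding is easy to demonstrate with a toy simulation: treat the observed “sentiment” score as a fixed attitude plus mood and emotion terms, and watch the measurement move with the time of day even though the attitude never changes. Every number here is invented.

    # Toy simulation of sentiment = attitude + mood + emotion.
    # All values are invented to illustrate the confounding.
    import random

    random.seed(1)
    TRUE_ATTITUDE = 0.2                # what we actually want to measure

    def observed_sentiment(hour):
        mood = 0.3 if 10 <= hour <= 16 else -0.1   # crude midday peak
        emotion = random.gauss(0, 0.25)            # short-term spikes
        return TRUE_ATTITUDE + mood + emotion

    morning = [observed_sentiment(8) for _ in range(1000)]
    midday = [observed_sentiment(13) for _ in range(1000)]
    print("mean sentiment at 8am:", round(sum(morning) / len(morning), 2))
    print("mean sentiment at 1pm:", round(sum(midday) / len(midday), 2))

Neither average equals the underlying attitude of 0.2, and the gap between them is pure mood, exactly the prediction made above about sentiment captured at the edges of the day.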


And so it begins

Stories out today report that Google is now able to connect the purchasing habits of anyone it has a model for (i.e. almost everybody who’s ever been online) with its own data on their online activity.

For example, this story:

http://www.smh.com.au/technology/technology-news/google-now-knows-when-its-users-go-to-the-store-and-buy-stuff-20170523-gwbnza.html

Google says that this enables them to draw a line between the ads that users have been shown and the products that they then buy. There’s a discrepancy in this story, because Google also claims that they don’t get the list of products purchased using a credit card, only the total amount. So a big hmmmmm.

(And if I were Google, I’d be concerned that there isn’t much of a link! Consumers might be less resentful if Google did indeed serve ads for things they wanted to buy, but everyone I’ve ever heard talk about online ads says the same thing: the ads either have nothing to do with their interests, or they are ads for things that they just bought.)

But connecting users to purchases (rather than ads to purchases) is the critical step to building a model of how much users are willing to pay — and this is the real risk of multinational data collection and analytics (as I’ve discussed in earlier posts).

Lessons from Wannacrypt and its cousins

Now that the dust has settled a bit, we can look more objectively at the Wannacrypt ransomware and the other malware exploiting the same vulnerability.

First, this attack vector existed because Microsoft, a long time ago, made a mistake in a file-sharing protocol. The mistake was (apparently) exploited by the NSA, and then by others with worse intentions, but the vulnerability itself is all down to Microsoft.

There are three pools of vulnerable computers that played a role in spreading the Wannacrypt worm, as well as falling victim to it.

  1. Enterprise computers that were not being updated in a timely way because it was too complicated to keep all of their other software systems working at the same time. When Microsoft issues a patch, bad actors immediately try to reverse engineer it to work out what vulnerability it addresses. The last time I heard someone from Microsoft Security talk about this, they estimated it took about 3 days. If you hadn’t updated within that window, you were vulnerable to an attack that the patch would have prevented. Many businesses judged the risk of updating promptly (a patch interacting badly with their running systems) as greater than the risk of being attacked in the gap, but they may now have to re-evaluate that calculus!
  2. Computers running XP for perfectly rational reasons. Microsoft stopped supporting XP because they wanted people to buy new versions of their operating system (and often new hardware to run it), but there are many, many people in the world for whom a computer running XP is a perfectly serviceable product, and who will continue to run it as long as their hardware keeps working. The software industry continues to get away with failing to warrant its products as fit for purpose; this wouldn’t work in other industries. Imagine the discovery that the locks on a car stopped working after 5 years: could the manufacturer get away with claiming that the car was no longer supported? (Microsoft did, in this instance, release a patch for XP, but well after the fact.)
  3. Computers running unregistered versions of Microsoft operating systems (which therefore do not get updates). Here Microsoft is culpable for the opposite reason: people can run an unregistered version for years, provided they’re willing to re-install it periodically, and it’s technically possible to prevent this kind of serial illegality, or at least make it much more difficult.

The analogy is with public health. When there’s a large pool of unvaccinated people, the risk to everyone increases. Microsoft’s business decisions make the pool of ‘unvaccinated’ computers much larger than it needs to be. And while this pool is out there, there will always be bad actors who can find a use for the computers it contains.

Asymmetric haggling

I’ve pointed out before that the real threat to privacy comes not from governments, but from multinationals; and the key model of you that they want to build is how much you’re willing to pay for products and services. They can then use this information directly (airlines, Amazon) or sell it to others (Google).

We’re used to a world in which products and services cost the same for every buyer, but this world is rapidly disappearing. There’s a good article about the state of play in the May issue of the Atlantic:

https://www.theatlantic.com/magazine/archive/2017/05/how-online-shopping-makes-suckers-of-us-all/521448/

The price you are quoted in an online setting already depends on the time of day, the platform you’re using, and your previous browsing history.

Of course, fixed prices are a relatively new invention, and haggling is still how prices are determined in much of the world. The difference in the online world is that the seller has much more data, and much more modelling capability, to work out what you’re willing to pay than you have about how cheaply the seller is willing to sell. So pricing is not only an adversarial process; it has become a highly asymmetric one.
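
A toy sketch of the seller’s side of this haggle (every weight and signal here is invented; real systems presumably use far richer models):

    # Toy seller-side dynamic pricing: the quote depends on signals
    # the buyer leaks. All weights and signals are invented.
    def quoted_price(base, hour, platform, visits_to_this_page):
        price = base
        if platform in ("mac", "ios"):        # platform as a wealth proxy
            price *= 1.08
        if hour >= 22 or hour <= 6:           # late-night urgency
            price *= 1.03
        price *= 1.0 + 0.02 * min(visits_to_this_page, 5)  # repeated interest
        return round(price, 2)

    print(quoted_price(100.0, hour=23, platform="ios", visits_to_this_page=3))
    print(quoted_price(100.0, hour=11, platform="windows", visits_to_this_page=0))

The buyer sees only the final number; the model, the weights, and even the fact that the price was personalised are invisible. That is the asymmetry.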

This is going to have all sorts of unforeseen consequences: analytics for buyers, buying surrogates, a return to bricks-and-mortar shopping for some people, less overall buying… We’ll find out.

In the meantime, it’s always a good idea to shop online using multiple browsers on multiple platforms, deleting cookies and other trackers as much as you can.
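
One way to see whether a site is doing this to you is to request the same product page from a completely fresh session and compare the quoted price with what your normal browser sees. A minimal sketch using Python’s requests library; the URL is a placeholder and the price extraction is deliberately naive, since real pages need site-specific parsing:

    # Compare the price shown to a fresh, cookie-free session with the
    # one your usual browser sees. URL and regex are placeholders.
    import re
    import requests

    URL = "https://example.com/product/12345"    # hypothetical product page

    def fetch_price(session):
        html = session.get(URL, timeout=10).text
        match = re.search(r"\$(\d+\.\d{2})", html)   # naive price grab
        return float(match.group(1)) if match else None

    clean = requests.Session()                   # no stored cookies or history
    print("clean-session price:", fetch_price(clean))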