Posts Tagged 'cybersecurity'

More thoughts on Huawei

“5G” is marketing speak for whatever comes next in computer networks. It promises 100 times greater speed and the ability to connect many more devices in a small space. However, “5G” is unlikely to exist as a real thing until two serious problems are addressed. First, there is no killer app that demands this increase in performance. Examples mentioned breathlessly by the media include downloading an entire movie in seconds (which doesn’t seem to motivate many people), vehicles communicating with one another (still years away), and Internet of Things devices communicating widely (the whole communicating-lightbulbs phenomenon seems to have put consumers off rather than motivated them). Second, “5G” will require a much denser network of cell towers, and it’s far from clear how they will be paid for and powered. The 5G networks touted in the media today require specialized handsets that are incompatible with existing networks and exist only in the downtown cores of a handful of cities. So “5G” per se is hardly a pressing issue.

Nevertheless, it does matter who provides the next generation of network infrastructure because networks have become indispensable to ordinary life – not just entertainment, but communication and business. And that’s why several countries have been so vocal against Huawei’s attempts to become a key player.

There are two significant issues. First, a network switch provider can see, block, or divert all the traffic passing through its switches. Even encrypting the traffic content doesn’t help much; it’s still possible to see who’s communicating with whom and how often. Huawei, however much it claims to the contrary, is subject to Chinese law that requires it to cooperate with the Chinese government, and so can never provide neutral services. It doesn’t help to say, as Huawei does, that because it has never acted at the behest of the Chinese government, it never will in the future. Nor does it help to say that no backdoor has ever been found in its software. All network switches can be updated over the Internet, so the software they run today need not be the software they run tomorrow. It is not surprising that many governments, including the US and Australia, have reservations about allowing Huawei to provide network infrastructure.

Second, the next generation of network infrastructure will have to be more complex than what exists now. A long-standing collaboration between the UK and Huawei tried to improve confidence in Huawei products by disassembling and testing them. Their concern, for a number of years, was that supposedly identical software built in China and built in the UK turned out to be of different sizes. This is a bad sign, because it suggests that the software pays attention to where it is being built and modifies itself accordingly (much as VW emissions software checked whether the vehicle was undergoing an emissions test and modified its behaviour). However, their 2019 report concluded that the issue stemmed from Huawei’s software construction processes, which were so flawed that Huawei was unable to build software consistently anywhere. The software being studied is for today’s 4G network infrastructure, and the joint GCHQ-Huawei Centre concluded that it would take Huawei several years even to reach today’s software engineering state of the art. It seems inconceivable that Huawei will be able to produce usable network infrastructure for an environment that will be many times more complex.

These two problems, in a way, cancel each other out – if the network infrastructure is of poor quality it probably can’t be manipulated explicitly by Huawei. But its poor quality increases the opportunity for attacks on networks by China (without involving Huawei), Russia, Iran, or even terrorist groups.

Huawei systems are cheaper than their competitors, and it’s a truism that convenience trumps security. But the long-term costs of a Huawei connected world may be more than we want to pay.

Tips for adversarial analytics

I put together this compendium of things that are useful to know for those starting out in analytics for policing, signals intelligence, counterterrorism, anti-money-laundering, cybersecurity, and customs; and which might be useful to those applying analytics when organisational priorities come into conflict with customers’ interests (as they almost always do).

Most of the content is either tucked away in academic publications, not publishable by itself, or common knowledge among practitioners but not written down.

I hope you find it helpful (pdf): Tips for Adversarial Analytics

China-Huawei-Canada fail

Huawei has been trying to convince the world that they are a private company with no covert relationships to the Chinese government that might compromise the security of their products and installations.

This attempt has been torpedoed by the Chinese ambassador to Canada who today threatened ‘retaliation’ if Canada joins three of the Five Eyes countries (and a number of others) in banning Huawei from provisioning 5G networks. (The U.K. hasn’t banned Huawei equipment, but BT is uninstalling it, and the unit set up jointly by Huawei and GCHQ to try to alleviate concerns about Huawei’s hardware and software has recently reported that it’s less certain about the security of these systems now than it was when the process started.)

It’s one thing for a government to act as a booster for national industries — it’s another to deploy government force directly.

China seems to have a tin ear for the way that the rest of the world does business; it can’t help but hurt them eventually.

The cybercrime landscape in Canada

Statscan recently released the results of their survey of cybercrime and cybersecurity in 2017 (https://www150.statcan.gc.ca/n1/pub/71-607-x/71-607-x2018007-eng.htm).

Here are some of the highlights:

  • About 20% of Canadian businesses had a cybersecurity incident (that they noticed). Of these around 40% had no detectable motive, another 40% were aimed at financial gain, and around 23% were aimed at getting information.
  • More than half of the incidents had enough of an impact to prevent the business from operating for at least a day.
  • Rates of incidents were much higher in the banking sector and pipeline transportation sector (worrying), and in universities (not unexpected, given their need to operate openly).
  • About a quarter of businesses don’t use an anti-malware tool, about a quarter do not have email security (not clear what this means, but presumably antivirus scanning of incoming email, and maybe exfiltration protection), and almost a third do not have network security. These are terrifying numbers.

Relatively few businesses have a policy for managing and reporting cybersecurity incidents; vanishingly few have senior executive involvement in cybersecurity.

It could be worse, but this must be disappointing to those in the Canadian government who’ve been developing and pushing out cyber awareness.

People in glass houses

There’s a throwaway line in Woodward’s book about the Trump White House (“Fear”, Simon and Schuster, 2018) where he says that the senior military were unwilling to carry out offensive cyber operations because they didn’t think the US would fare well under retaliation.

Then this week the GAO came out with a report on cybersecurity in DOD weapons systems (as opposed to DOD networks). It does not make happy reading. (Full report).

Here’s what seems to me to be the key quotation:

“We found that from 2012 to 2017, DOD testers routinely found mission critical cyber vulnerabilities in nearly all weapon systems that were under development. Using relatively simple tools and techniques, testers were able to take control of these systems and largely operate undetected”

Almost every word could be italicized and many added exclamation marks would hardly suffice.

To be fair, some of these systems are still under development. But the report makes clear that, for many of them, cybersecurity was not really considered in their design. The typical assumption was that weapons systems are standalone. But in a world where software runs everything, there has to be a mechanism for software updates at least, and so a connection to the outside world. As the Iranians discovered, even updating from a USB drive is not attack-proof. And security is a difficult property to retrofit, so these systems will never be as cyberattack-resistant as we might all have wished.

Lessons from Wannacrypt and its cousins

Now that the dust has settled a bit, we can look at the Wannacrypt ransomware, and the other malware exploiting the same vulnerability, more objectively.

First, this attack vector existed because Microsoft, a long time ago, made a mistake in a file sharing protocol. It was (apparently) exploited by the NSA, and then by others with less good intentions, but the vulnerability is all down to Microsoft.

There are three pools of vulnerable computers that played a role in spreading the Wannacrypt worm, as well as falling victim to it.

  1. Enterprise computers which were not being updated in a timely way because it was too complicated to maintain all of their other software systems at the same time. When Microsoft issues a patch, bad actors immediately try to reverse engineer it to work out what vulnerability it addresses. The last time I heard someone from Microsoft Security talk about this, they estimated it took about 3 days for this to happen. If you hadn’t updated within that window, you were vulnerable to an attack that the patch would have prevented. Many businesses evaluated the risk of disruption from an interaction of the patch with their running systems as greater than the risk of delaying the update; they may now have to re-evaluate that calculus!
  2. Computers running XP for perfectly rational reasons. Microsoft stopped supporting XP because they wanted people to buy new versions of their operating system (and often new hardware to be able to run it), but there are many, many people in the world for whom a computer running XP is a perfectly serviceable product, and who will continue to run it as long as their hardware keeps working. The software industry continues to get away with failing to warrant its products as fit for purpose, something that wouldn’t fly in other industries. Imagine discovering that the locks on a car stopped working after 5 years; could the manufacturer get away with claiming that the car was no longer supported? (Microsoft did, in this instance, release a patch for XP, but well after the fact.)
  3. Computers running unregistered versions of Microsoft operating systems (which therefore do not get updates). Here Microsoft is culpable for an opposite reason. People can run an unregistered version for years and years, provided they’re willing to re-install it periodically. It’s technically possible to prevent this kind of serial illegality, or at least to make it much more difficult.

The analogy is with public health. When there’s a large pool of unvaccinated people, the risk to everyone increases. Microsoft’s business decisions make the pool of ‘unvaccinated’ computers much larger than it needs to be. And while this pool is out there, there will always be bad actors who can find a use for the computers it contains.

Secrets and authentication: lessons from the Yahoo hack

Authentication (I’m allowed to access something or do something) is based on some kind of secret. The standard framing of this is that there are three kinds of secrets:

  1. Something I have (like a device that generates 1-time keys)
  2. Something I am (like a voiceprint or a fingerprint), or
  3. Something I know (like a password).

There are problems with the first two mechanisms. Having something (a front door key) is the way we authenticate getting into our houses and offices, but it doesn’t transfer well to the digital space. Being something looks like it works better but suffers from the problem that, if the secret becomes widely known, there’s often no way to change the something (“we’ll be operating on your vocal cords to change your voice, Mr. Smith”). Which is why passwords tend to be the default authentication mechanism.

At first glance, passwords look pretty good. I have a secret, the password, and the system I’m authenticating with has another secret, the encrypted version of the password. Unfortunately, the system’s secret isn’t very secret: given the prevalence of wifi, the encrypted version of my password is almost always transmitted in the clear. Getting from the system’s secret to mine is hard, which is supposed to prevent my secret being reverse engineered from the system’s.

The problem is that the space of possible passwords is small enough that the easy mapping, from my secret to the system’s, can be tried for all strings of reasonable length. So brute force enables the reverse engineering that was supposed to be hard. Making passwords longer and more random helps, but only at the margin.
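To see just how small the search space is, here is a minimal sketch of the brute-force attack described above. SHA-256 stands in for whatever one-way function a real system uses, and the function names are illustrative, not from any particular product:

```python
import hashlib
import string
from itertools import product

def hash_password(pw: str) -> str:
    # The 'easy' direction: from my secret to the system's secret.
    return hashlib.sha256(pw.encode()).hexdigest()

def brute_force(target_hash: str, alphabet: str, max_len: int):
    # The 'hard' direction falls to plain enumeration when the space
    # of possible passwords is small enough to try exhaustively.
    for length in range(1, max_len + 1):
        for candidate in product(alphabet, repeat=length):
            guess = "".join(candidate)
            if hash_password(guess) == target_hash:
                return guess
    return None

# A short lowercase password is recovered almost instantly.
stolen = hash_password("cat")
print(brute_force(stolen, string.ascii_lowercase, 4))  # prints cat
```

Each extra character multiplies the work by the alphabet size, which is why longer, more random passwords help, but only at the margin.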

We could instead make the secret a function rather than a string. As the very simplest example, the system could present me with a few small integers, and my authentication would be based on knowing that I’m supposed to add the first two and subtract the third. My response to the system is the resulting value. Your secret might be to add the first and the third and ignore the second.
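As a toy illustration of that idea (the functions and the challenge format are invented for this example, not taken from any deployed scheme):

```python
import random

def my_secret(challenge):
    a, b, c = challenge
    return a + b - c  # add the first two, subtract the third

def your_secret(challenge):
    a, b, c = challenge
    return a + c  # add the first and third, ignore the second

def authenticate(claimed_fn, stored_fn):
    # The system issues a fresh challenge each time, so overhearing one
    # response doesn't directly reveal the function itself.
    challenge = [random.randint(1, 9) for _ in range(3)]
    return claimed_fn(challenge) == stored_fn(challenge)

print(authenticate(my_secret, my_secret))    # True
print(authenticate(your_secret, my_secret))  # usually False, though two
# different functions can agree on a single challenge by chance
```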

But limitations on what humans can compute on the fly mean that the space of functions can’t actually be very large, so this doesn’t lead to a practical solution.

Some progress can be made by insisting that I and the system each hold a different secret. Then a hack of the system, or of me by phishing, isn’t enough to gain access. There are a huge number of secret sharing schemes of varying complexity. But for the simplest example, my secret is a binary string of length n, and the system’s secret is another binary string of length n. We exchange encrypted versions of our strings, and the system authenticates me if the exclusive-or of its string and mine has a particular pattern. Usefully, I can also find out whether the system is genuine by carrying out my own check. This particular pattern is (sort of) a third secret, but one that neither of us has to communicate, and so is easier to protect.

This system can be broken, but it requires a brute force attack on the encrypted version of my secret, the encrypted version of the system’s secret, and then working out what function is applied to merge the two secrets (xor here, but it could be something much more complex). And that still doesn’t get access to the third secret.
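A toy version of the xor scheme might look like the following. The encryption of the exchanged strings is omitted, and the key length and pattern are invented for illustration:

```python
import secrets

N = 16  # toy key length in bits

# The third secret: the pattern both sides check for, never sent over the wire.
PATTERN = 0b1010_0110_0101_1001

# Provisioning: the system's secret is chosen so that xor-ing it
# with mine yields the agreed pattern.
my_secret = secrets.randbits(N)
system_secret = my_secret ^ PATTERN

def check(a: int, b: int) -> bool:
    # Both sides run the same test, so I can verify the system
    # just as it verifies me.
    return (a ^ b) == PATTERN

print(check(system_secret, my_secret))      # True: the xor matches the pattern
print(check(system_secret, my_secret ^ 1))  # False: one flipped bit fails
```

An attacker needs both strings, and the function that merges them, before the pattern can even be tested for; that is the extra work described above.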

Passwords are the dinosaurs of the internet age; secret sharing is a reasonable approach for the short to medium term, but (as I’ve argued here before) computing in compromised environments is still the best hope for the longer term.