
Huawei’s new problem

The Huawei Cyber Security Evaluation Centre (HCSEC) is a joint effort between GCHQ and Huawei to increase confidence in Huawei products for use in the UK. It has been up and running since 2013.

Its 2018 report focused on the issue of replicable builds: binaries compiled in China were not the same size as binaries built in the UK. To a computer scientist, this is a bad sign, since it suggests that the code contains conditional compilation statements such as:

#if COUNTRY_CODE == UK
    insert_backdoor();
#endif

In the intervening year, they dug into this issue, and the answer they came up with is unexpected. The problem turns out to be a symptom not of malice but of incompetence: the code is simply not well enough engineered to produce consistent results.

Others have discussed the technical issues in detail:

https://www.theregister.co.uk/2019/03/28/hcsec_huawei_oversight_board_savaging_annual_report/

but here are some quotes from the 2019 report:

“there remains no end-to-end integrity of the products as delivered by Huawei and limited confidence on Huawei’s ability to understand the content of any given build and its ability to perform true root cause analysis of identified issues. This raises significant concerns about vulnerability management in the long-term”

“Huawei’s software component management is defective, leading to higher vulnerability rates and significant risk of unsupportable software”

“No material progress has been made on the issues raised in the previous 2018 report”

“The Oversight Board continues to be able to provide only limited assurance that the long-term security risks can be managed in the Huawei equipment currently deployed in the UK”

Not only is the code quality poor, but they see signs of attempts to cover up the shortcuts and practices that led to the issue in the first place.

The report is also scathing about Huawei’s efforts/promises to clean up its act, and it estimates a best-case timeline of five years to get to well-implemented code.

5G (whatever you take that to mean) will be at least ten times more complex than current networking systems. I think any reasonable computer scientist would conclude that Huawei will simply be unable to build such systems.

Canada, and some other countries, are still debating whether or not to ban Huawei equipment. This report suggests that such decisions can be depoliticised, and made based purely on economic grounds.

But, from a security point of view, there’s still an issue — the apparently poor quality of Huawei software creates a huge threat surface that can be exploited by the governments of China (with or without Huawei involvement), Russia, Iran, and North Korea, as well as non-state actors and cyber criminals.

(Several people have pointed out that other network multinationals have not been scrutinised at the same depth and, for all we know, they may be just as bad. This seems implausible to me. One of the unsung advantages that Western businesses have is the existence of NASA, which has been pioneering reliable software for 50 years. If you’re sending a computer on a one-way trip to a place where no maintenance is possible, you pay a LOT of attention to getting the software right. The ideas and technology developed by NASA have had an influence on software engineering programs in the West that has tended to raise the quality of all of the software developed there. There have been unfortunate lapses whenever the idea that software engineering is JUST coding becomes popular (Windows 95, Android apps), but overall the record is reasonably good. Lots better than the glimpse we get of Huawei, anyway.)

Time to build an artificial intelligence (well, try anyway)

NASA is a divided organisation, with one part running uncrewed missions, and the other part running crewed missions (and, apparently, internal competition between these parts is fierce). There’s no question that the uncrewed-mission part is advancing both science and solar system exploration, and so there are strong and direct arguments for continuing in this direction. The crewed-mission part is harder to make a case for; the ISS spends most of its time just staying up there, and has been reduced to running high-school science experiments. (Nevertheless, there is a strong visceral feeling that humans ought to continue to go into space, which is driving the various proposals for Mars.)

But there’s one almost invisible but extremely important payoff from crewed missions (no, it’s not Tang!): reliability. You only have to consider the difference in the level of response between the failed liftoff last week of a supply rocket to the ISS, and the accidents involving human deaths. When an uncrewed rocket fails, there’s an engineering investigation leading to a significant improvement in the vehicle or processes. When there’s a crewed rocket failure, it triggers a phase shift in the whole of NASA, something that’s far more searching. The Grissom/White/Chaffee accident triggered a complete redesign of the way life support systems worked; the Challenger disaster caused the entire launch approval process to be rethought; and the Columbia disaster led to a new regime of inspections after reaching orbit. Crewed missions cause a totally different level of attention to safety, which in turn leads to a new level of attention to reliability.

Some changes only get made when there’s a really big, compelling motivation.

Those of us who have been flying across oceans for a while cannot help but be aware that 4-engined aircraft have almost entirely been replaced by 2-engined aircraft; and that the spare engines once strapped underneath a wing, for delivery to a remote location to replace an engine that had failed, are never seen either. This is just one area where increased reliability has paid off. Cars routinely last for hundreds of thousands of kilometres, which only a few high-end brands used to do forty years ago. There are, of course, many explanations, but the crewed space program is part of the story.

What does this have to do with analytics? I think the time is right to try to build an artificial intelligence. The field known as “artificial intelligence” or machine learning or data mining or knowledge discovery has been notorious for overpromising and underdelivering since at least the 1950s, so I need to be clear: I don’t think we actually know enough to build an artificial intelligence — but I think it has potential as an organising principle for a large amount of research that is otherwise unfocused.

One reason why I think this might be the right time is that deep learning has shown how to solve (at least potentially) some of the roadblock problems (although I don’t know that the neural network bias in much deep learning work is necessary). Consciousness, which was considered an important part of intelligence, has turned out to be a bit of a red herring. Of course, we can’t tell what the next hard part of the problem will turn out to be.

The starting problem is this: how can an algorithm identify when it is “stuck” and “go meta”? (Yes, I know that computability informs us that there isn’t a general solution, but we, as humans, do this all the time — so how can we encode this mechanism algorithmically, at least as an engineering strategy?) Any level of success here would pay off immediately in analytics platforms that are really good at building models, and even assessing them, but hopeless at knowing when to revise them.
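A crude engineering approximation of “knowing when you are stuck” is a monitor that watches a model’s recent prediction errors and flags when they are persistently worse than the error level the model was validated at — the signal that it is time to “go meta” and revise the model rather than keep predicting. This is only a sketch under my own assumptions; the class name, window size, and threshold factor are illustrative choices, not an established technique.

```python
# Sketch: a minimal "am I stuck?" monitor for a deployed model.
# It flags when the rolling average of recent errors is persistently
# worse than a factor times the baseline (validation-time) error.
from collections import deque


class StuckDetector:
    """Signals when a model's recent errors suggest it needs revision."""

    def __init__(self, baseline_error: float, window: int = 50, factor: float = 2.0):
        self.baseline = baseline_error      # error level at validation time
        self.recent = deque(maxlen=window)  # rolling window of recent errors
        self.factor = factor                # how much worse counts as "stuck"

    def observe(self, error: float) -> bool:
        """Record one prediction error; return True if the model looks stuck."""
        self.recent.append(error)
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough evidence yet
        avg = sum(self.recent) / len(self.recent)
        return avg > self.factor * self.baseline
```

The hard, open part of the problem is of course what happens after the flag fires — choosing *how* to revise the model — but even this first step is missing from most analytics platforms.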