Posts Tagged 'artificial intelligence'

What is the “artificial intelligence” that is all over the media?

Anyone who’s been paying attention to the media will be aware that “artificial intelligence” is a hot topic. And it’s not just the media — China recently announced a well-funded quasi-industrial campus devoted to “artificial intelligence”.

Those of us who’ve been around for a while know that “artificial intelligence” becomes the Next Big Thing roughly every 20 years, and so are inclined to take every hype wave with a grain (or truckload) of salt. Building something algorithmic that could genuinely be considered intelligent is much, much, much harder than it looks, and I don’t think we’re even close to solving this problem. Not that we shouldn’t try, as I’ve argued earlier, but mostly for the side-effects that the attempt will spin off.

So what is it that the media (etc.) think has happened that’s so revolutionary? I think there are two main things:

  1. There’s no question that deep learning has made progress on some problems that had been intractable for a while, notably object recognition and language processing. So it’s not surprising that there’s a sense that “artificial intelligence” has suddenly made progress. However, much of this progress has been over-hyped by the businesses doing it. For example, object recognition has a huge, poorly understood weakness: the same algorithm, given a pair of images that look identical to us, can correctly identify the objects in one image and produce complete garbage for the other. In other words, the continuity that somehow “ought” to hold between images differing only by minor pixel-level changes is not respected by the algorithmic object recogniser. In the language space, word2vec doesn’t seem to do what it’s claimed to do, and the community has had trouble reproducing some of the early successes.
  2. Algorithms are inserting themselves into decision-making settings that previously required humans in the loop. When machines supplemented human physical strength, there was an outcry about the takeover of the machines; when algorithms supplemented human mental abilities, there was an outcry about the takeover of the machines; and now that algorithms are controlling systems without humans, there’s a fresh outcry. Of course, this isn’t as new as it looks — autopilots have been flying planes better than human pilots for a while, and automated train systems are commonplace. But driverless vehicles have pushed these abilities into public view in a new, forceful way, and it’s unsettling. And militaries keep talking about automated warfare (although this seems implausible given current technology).
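The adversarial-image weakness described in point 1 can be made concrete with a minimal sketch. Real attacks target deep networks, but the mechanism — a tiny, structured perturbation that shifts class scores — shows up even in a toy linear classifier. Everything here (the random weights, the epsilon, the choice of rival class) is illustrative, not any specific published attack:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "classifier": scores = W @ x. All names and numbers here are
# illustrative; this is a sketch of the mechanism, not a real recogniser.
n_pixels, n_classes = 784, 10
W = rng.normal(size=(n_classes, n_pixels))

x = rng.uniform(0.0, 1.0, size=n_pixels)   # a fake "image" with pixels in [0, 1]
true_label = int(np.argmax(W @ x))

# Fast-gradient-style perturbation: nudge every pixel a tiny step in the
# direction that raises a rival class's score relative to the true class.
rival = (true_label + 1) % n_classes
gradient = W[rival] - W[true_label]        # d(score gap)/dx for a linear model
epsilon = 0.01                             # per-pixel change, invisible to the eye
x_adv = np.clip(x + epsilon * np.sign(gradient), 0.0, 1.0)

# The images are visually indistinguishable (no pixel moved more than 0.01),
# yet the rival-vs-true score gap has been pushed systematically in one direction.
print("max pixel change:", np.max(np.abs(x_adv - x)))
print("score gap before:", float(gradient @ x), "after:", float(gradient @ x_adv))
```

Because every pixel's tiny nudge is aligned with the gradient, the per-pixel changes add up coherently across hundreds of pixels — which is exactly why a perturbation far below perceptual threshold can still flip a classifier's output.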

My sense is that these developments are all incremental, from a technical perspective if not from a social one. There are interesting new algorithms, many of them simply large-scale applications of heuristics, but nothing that qualifies as a revolution. I suspect that “artificial intelligence” is yet another bubble, as it was in the 60s and in the 80s.

Time to build an artificial intelligence (well, try anyway)

NASA is a divided organisation, with one part running uncrewed missions, and the other part running crewed missions (and, apparently, internal competition between these parts is fierce). There’s no question that the uncrewed-mission part is advancing both science and solar system exploration, and so there are strong and direct arguments for continuing in this direction. The crewed-mission part is harder to make a case for; the ISS spends most of its time just staying up there, and has been reduced to running high-school science experiments. (Nevertheless, there is a strong visceral feeling that humans ought to continue to go into space, which is driving the various proposals for Mars.)

But there’s one almost invisible but extremely important payoff from crewed missions (no, it’s not Tang!): reliability. You only have to compare the level of response to last week’s failed liftoff of a supply rocket to the ISS with the response to accidents involving human deaths. When an uncrewed rocket fails, there’s an engineering investigation leading to a significant improvement in the vehicle or processes. When there’s a crewed rocket failure, it triggers a phase shift in the whole of NASA, something that’s far more searching. The Grissom/White/Chaffee accident triggered a complete redesign of the way life support systems worked; the Challenger disaster caused the entire launch approval process to be rethought; and the Columbia disaster led to a new regime of inspections after reaching orbit. Crewed missions cause a totally different level of attention to safety, which in turn leads to a new level of attention to reliability.

Some changes only get made when there’s a really big, compelling motivation.

Those of us who have been flying across oceans for a while cannot help but be aware that 4-engined aircraft have almost entirely been replaced by 2-engined aircraft; and that the once-familiar sight of a spare engine strapped underneath the wing, being delivered to a remote location to replace one that had failed, has disappeared as well. This is just one area where increased reliability has paid off. Cars routinely last for hundreds of thousands of kilometres, which only a few high-end brands used to do forty years ago. There are, of course, many explanations, but the crewed space program is part of the story.

What does this have to do with analytics? I think the time is right to try to build an artificial intelligence. The field known as “artificial intelligence” (or machine learning, or data mining, or knowledge discovery) has been notorious for overpromising and underdelivering since at least the 1950s, so I need to be clear: I don’t think we actually know enough to build an artificial intelligence — but I think it has potential as an organising principle for a large amount of research that is otherwise unfocused.

One reason why I think this might be the right time is that deep learning has shown how to solve (at least potentially) some of the roadblock problems (although I don’t know that the neural network bias in much deep learning work is necessary). Consciousness, which was considered an important part of intelligence, has turned out to be a bit of a red herring. Of course, we can’t tell what the next hard part of the problem will turn out to be.

The starting problem is this: how can an algorithm identify when it is “stuck” and “go meta”? (Yes, I know that computability informs us that there isn’t a general solution, but we, as humans, do this all the time — so how can we encode this mechanism algorithmically, at least as an engineering strategy?) Any level of success here would pay off immediately in analytics platforms, which are already really good at building models, and even at assessing them, but hopeless at knowing when to revise them.
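At its most pedestrian, “noticing you are stuck” looks something like concept-drift detection: a meta-level monitor watches an object-level model’s recent errors and forces a revision when they stay stubbornly high. The sketch below is a deliberately tiny strawman of that idea — the model, window size, and tolerance are all invented for illustration, and genuinely “going meta” would mean far more than discarding and refitting:

```python
import random

class MeanModel:
    """Object-level learner: predicts the running mean of everything seen so far."""
    def __init__(self):
        self.n, self.mean = 0, 0.0
    def predict(self):
        return self.mean
    def update(self, y):
        self.n += 1
        self.mean += (y - self.mean) / self.n

def run(stream, window=30, tolerance=3.0):
    model = MeanModel()
    recent = []        # sliding window of recent absolute errors
    revisions = 0
    for y in stream:
        err = abs(model.predict() - y)
        recent.append(err)
        if len(recent) > window:
            recent.pop(0)
        # Meta-level check: if recent errors stay far above the noise level,
        # the object-level model is "stuck"; discard it and start over.
        if model.n > window and sum(recent) / len(recent) > tolerance:
            model = MeanModel()
            recent.clear()
            revisions += 1
        model.update(y)
    return model, revisions

# A stream whose distribution shifts abruptly halfway through: the monitor
# should trigger one revision, after which the fresh model fits the new regime.
random.seed(1)
stream = [random.gauss(0.0, 1.0) for _ in range(200)]
stream += [random.gauss(10.0, 1.0) for _ in range(200)]
model, revisions = run(stream)
print("revisions:", revisions, "final mean:", round(model.mean, 1))
```

The interesting (and hard) part is everything this sketch fakes: the tolerance is hand-set rather than inferred, the only revision available is “start over”, and the monitor has no way to notice that its own thresholds are wrong — which is exactly where a real “go meta” mechanism would have to begin.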