Machine learning is not even close to resembling human intelligence

We are a long way from being enslaved by robot overlords, at least according to Paul Taylor, who has published an article on the history and present state of machine learning in the London Review of Books. Taylor outlines the many complex (and expensive) approaches that scientists have used in recent decades to get computers to learn and think like humans. While researchers have made great strides in building machines that can perform rule-based tasks, such as playing chess or driving a car, Taylor argues that they remain far from duplicating the brilliance of the human mind. Here's an excerpt from Taylor's article:

The solving of problems that until recently seemed insuperable might give the impression that these machines are acquiring capacities usually thought distinctively human. But although what happens in a large recurrent neural network better resembles what takes place in a brain than conventional software does, the similarity is still limited. There is no close analogy between the way neural networks are trained and what we know about the way human learning takes place. It is too early to say whether scaling up networks like Inception will enable computers to identify not only a cat’s face but also the general concept ‘cat’, or even more abstract ideas such as ‘two’ or ‘authenticity’. And powerful though Google’s networks are, the features they derive from sequences of words are not built from the experience of human interaction in the way our use of language is: we don’t know whether or not they will eventually be able to use language as humans do.

In 2006 Ray Kurzweil wrote a book about what he called the Singularity, the idea that once computers are able to generate improvements to their own intelligence, the rate at which their intelligence improves will accelerate exponentially. Others have aired similar anxieties. The philosopher Nick Bostrom wrote a bestseller, Superintelligence (2014), examining the risks associated with uncontrolled artificial intelligence. Stephen Hawking has suggested that building machines more intelligent than we are could lead to the end of the human race. Elon Musk has said much the same. But such dystopian fantasies aren’t worth worrying about yet. If there is something to be worried about today, it is the social consequences of the economic transformation computers might bring about – that, and the growing dominance of the small number of corporations that have access to the mammoth quantities of computing power and data the technology requires.