Is panic over robots and artificial intelligence justified?


After calling 2015 the year that “robot panic peaked,” an article in The Observer asks several computer experts whether fear over robots taking our jobs and eventually enslaving the human race is justified. Soberingly, a number of prominent tech minds, including Elon Musk and Steve Wozniak, say we have good reason to be worried. An excerpt:

But robots that kill – especially “intelligent” ones – are very much on the mind of those who worry most publicly about the AI-robot combination. Stephen Hawking told the BBC it “could spell the end of the human race” as it took off on its own and redesigned itself at an ever-increasing rate. Elon Musk, the billionaire who brought us PayPal and the Tesla car, called AI “our biggest existential threat”. Steve Wozniak, the co-founder of Apple, told the Australian Financial Review in March that “computers are going to take over from humans, no question” and that he now agreed with Hawking and Musk that “the future is scary and very bad for people… eventually [computers will] think faster than us and they’ll get rid of the slow humans to run companies more efficiently”.

Nick Bostrom may not have a similar claim to fame, but he is an Oxford University philosopher who argues in his book Superintelligence that self-improving AI could enslave or kill humans if it wanted to, and that controlling such machines could be impossible. But there’s no sign so far of inherently intelligent killer robots, or “anthropogenic AI”, as it’s also called. Reviewing Bostrom’s book, the scientist Edward Moore Geist suggested that it “is propounding a solution that will not work to a problem that probably does not exist”.


I wrote an essay attempting to deconstruct the notion that machines will somehow be sentient, and aggressive at that. I think it is more or less a reflection of the exasperation of the Enlightenment worldview, which at its limits — wherein Hawking and his cronies could be replaced by cousins of HAL-9000 — comes up against existential enigmas that cannot be thought through the bare life of bios produced by the anthropological machine of modern science. It's a bit rough and still needs editing, but I think it's worth a read.


This is an article that I really enjoyed reading. It's written by Sean Miller and titled “Bill Gates and Elon Musk are wrong: Artificial intelligence is not going to take over the world”. Miller argues that a more adequate term for AI is cognitive prosthetics, and illuminates aspects such as autonomy, autopoiesis, and sentience as dependent on social interaction, among others, pointing out that bacteria (in these respects) surpass AI in its current and near-future forms.

“Unlike AI, though, bacteria are autonomous. They locomote, consume and proliferate all on their own. They exercise true agency in the world. Accordingly, to be more precise, we should say that bacteria are more than just autonomous, they’re autopoietic, self-made. Despite their structural complexity, they require no human intervention or otherwise to evolve.”


“AI augments people’s thinking, their goals, drives and prejudices. I say “people” and not “human” because I don’t want to imply a universality. Particular instances of AI, which is essentially computer code, are written and executed on computer networks by people in specific social contexts. An AI’s drives are a reflection of the idiosyncrasies of the individuals who write its code.”


Perhaps a more plausible fear than robots “taking over” is simply that AI/automation/cognitive prosthetics will be used to extend and intensify the forms of domination that already exist, e.g., labor exploitation, racist oppression, etc. The technology may be new, but the ends to which it is being devoted are not.