Top Researchers Outline the Malicious Potential of AI Technology

As Daniel Oberhaus writes for Motherboard, a group of top AI researchers has published a major report on the potential for AI technology to be used for nefarious purposes, from hacking to surveillance to the takeover of automated systems like those that control self-driving cars. Check out an excerpt of the article below, or the full piece here.

A group of 26 leading AI researchers met in Oxford last February to discuss how superhuman artificial intelligence may be deployed for malicious ends in the future. The result of this two-day conference was a sweeping 100-page report published today that delves into the risks posed by AI in the wrong hands, and strategies for mitigating these risks.

One of the four high-level recommendations made by the working group was that “researchers and engineers in artificial intelligence should take the dual-use nature of their work seriously, allowing misuse-related considerations to influence research priorities and norms, and proactively reaching out to relevant actors when harmful applications are foreseeable.”

This recommendation is particularly relevant in light of the recent rise of “deepfakes,” a machine learning method mostly used to swap Hollywood actresses’ faces onto porn performers’ bodies. As first reported by Motherboard’s Sam Cole, these deepfakes were made possible by adapting TensorFlow, an open source machine learning library originally developed by Google engineers. Deepfakes underscore the dual-use nature of machine learning tools and raise the question of who should have access to them.

Although deepfakes are specifically mentioned in the report, the researchers also highlight the use of similar techniques to manipulate videos of world leaders as a threat to political security, one of the three threat domains they consider. One need only imagine the fallout if a fake video of Trump declaring war on North Korea surfaced in Pyongyang to understand what is at stake. Moreover, the researchers flagged the use of AI to enable unprecedented mass surveillance through data analysis, and mass persuasion through targeted propaganda, as further areas of political concern.