Top Researchers Write 100-Page Report Warning About AI Threat to Humanity

21-2-2018 · Blacklisted News · 331 words

When we think about artificial intelligence, we tend to think of humanized representations of machine learning like Siri or Alexa, but the truth is that AI is all around us, mostly running as a background process. This slow creep of AI into everything from medicine to finance can be hard to appreciate, if for no other reason than that it looks very different from the AI dreamt up by Hollywood in films like Ex Machina or Her. In fact, most ‘artificial intelligence’ today is quite stupid compared to a human: a machine learning algorithm might be able to wallop a human at a specific task, such as playing a game of Go, and still struggle at far more mundane tasks, like telling a turtle apart from a rifle.


Nevertheless, a group of 26 leading AI researchers met in Oxford last February to discuss how superhuman artificial intelligence might be deployed for malicious ends in the future. The result of this two-day conference was a sweeping 100-page report, published today, that delves into the risks posed by AI in the wrong hands and strategies for mitigating those risks.


One of the four high-level recommendations made by the working group was that “researchers and engineers in artificial intelligence should take the dual-use nature of their work seriously, allowing misuse-related considerations to influence research priorities and norms, and proactively reaching out to relevant actors when harmful applications are foreseeable.”


This recommendation is particularly relevant in light of the recent rise of “deepfakes,” a machine learning technique mostly used to swap Hollywood actresses’ faces onto porn performers’ bodies. As first reported by Motherboard’s Sam Cole, these deepfakes were made possible by adapting an open source machine learning library called TensorFlow, originally developed by Google engineers. Deepfakes underscore the dual-use nature of machine learning tools and raise the question of who should have access to them.

