Tech leaders including Elon Musk, Google DeepMind's co-founders and Skype's founder, along with more than 2,400 individuals from 160 companies, signed a pledge to not build AI-powered weapons. The pledge, organised by the Future of Life Institute, was announced at the International Joint Conference on Artificial Intelligence (IJCAI) in Stockholm.
The signatories represent over 90 countries and urged governments to pass stringent laws against such autonomous weapons. They include prominent AI researchers such as Yoshua Bengio, Stuart Russell and Jürgen Schmidhuber. DeepMind's three founders, Shane Legg, Mustafa Suleyman and Demis Hassabis, Skype founder Jaan Tallinn, and Tesla and SpaceX CEO Elon Musk have also taken the pledge.
The pledge states, “Thousands of AI researchers agree that by removing the risk, attributability and difficulty of taking human lives, lethal autonomous weapons could become powerful instruments of violence and oppression, especially when linked to surveillance and data systems.”
“The decision to take a human life should never be delegated to a machine,” the pledge adds.
The pledge comes after a strong backlash against a handful of companies over the use of their technologies by government agencies and law enforcement. Google's Project Maven was criticised for assisting US intelligence agencies by providing AI technology to help analyse drone footage, work that would otherwise require extensive human review. Similarly, Microsoft is facing a backlash for assisting the United States' Immigration and Customs Enforcement (ICE), while Amazon is being called out for sharing its facial recognition technology with law enforcement agencies.
In response to the backlash over Project Maven, Google released a set of principles to guide the company's ethical use of AI technology. The policy explicitly states that Google won't use its AI technology to design or deploy weapons, or for surveillance. It also states that Google won't provide its AI technology for applications "whose purpose contravenes widely accepted principles of international law and human rights."
Microsoft's partnership with Immigration and Customs Enforcement, according to the company, is limited to email, calendar, messaging and document management, and does not include facial recognition technology. Microsoft is instead reportedly working on its own guidelines for the use of facial recognition.
However, even with the internet giants pledging not to dabble in lethal autonomous weapons, or LAWS, there is no mechanism to enforce such a commitment. AI weapons already exist, and the majority of the technology is being developed by global superpowers including the US and China. There is also no agreement among the Western democratic states whose consensus was earlier critical in banning chemical weapons. China, for instance, has deployed its AI technology in building a social credit system that discriminates among the country's people based on their social activities.
That said, even in the absence of definite government action to curb autonomous weapons, the collective efforts of the developers of AI technology can still bear fruit.