Tech giants 'putting world at risk of a killer AI'

Despite calls for an international ban on lethal robots, companies like Microsoft are still pushing ahead with their development

The invention of an artificial super-intelligence has been a central theme in science fiction since at least the early 20th century. From E.M. Forster's short story The Machine Stops (1909) to the recent HBO television series Westworld, writers have tended to portray this possibility as an unmitigated disaster.

Amazon, Microsoft and Intel are now among the leading tech companies putting the world at risk through killer robot development, according to a report that surveyed major players in the sector about their stance on lethal autonomous weapons.

Dutch NGO Pax ranked 50 companies by three criteria: whether they were developing technology that could be relevant to deadly AI, whether they were working on related military projects, and whether they had committed to abstaining from contributing in the future.

According to the study's results, 22 companies were of "medium concern," while 21 fell into a "high concern" category, notably Amazon and Microsoft, which are both bidding for a $10bn Pentagon contract to provide the cloud infrastructure for the US military. Others in the "high concern" group include Palantir, a company with roots in a CIA-backed venture capital organization, which was awarded an $800m contract to develop an AI system "that can help soldiers analyse a combat zone in real time."

Meanwhile, Google was among the seven companies found to be engaging in "best practice" in the analysis, which spanned 12 countries, as was Japan's SoftBank, best known for its humanoid Pepper robot. Google's favourable ranking follows its decision last year not to renew a Pentagon contract called Project Maven, which used machine learning to distinguish people and objects in drone videos. The company also dropped out of the running for the Joint Enterprise Defense Infrastructure (JEDI) cloud contract that Amazon and Microsoft are hoping to bag, and published guiding principles eschewing AI for use in weapons systems.

"Why are companies like Microsoft and Amazon not denying that they're currently developing these highly controversial weapons, which could decide to kill people without direct human involvement?" Frank Slijper, lead author of the report asked.

The use of AI to allow weapon systems to autonomously select and attack targets has sparked ethical debates in recent years, with critics warning they would jeopardise international security and herald a third revolution in warfare after gunpowder and the atomic bomb. A panel of government experts even debated policy options regarding lethal autonomous weapons at a meeting of the United Nations Convention on Certain Conventional Weapons in Geneva on 21 August.

"Autonomous weapons will inevitably become scalable weapons of mass destruction, because if the human is not in the loop, then a single person can launch a million weapons or a hundred million weapons," Stuart Russell, a computer science professor at the University of California, Berkeley told AFP in an interview.

"The fact is that autonomous weapons are going to be developed by corporations, and in terms of a campaign to prevent autonomous weapons from becoming widespread, they can play a very big role," he added.

So what might these killer machines look like? According to Russell, "anything that's currently a weapon" that is made autonomous - whether tanks, fighter aircraft, or submarines.

Israel's Harpy, for example, is an autonomous drone that already exists, "loitering" in a target area and selecting sites to hit. More worrying, however, are the new categories of autonomous weapons that don't yet exist - these could include armed mini-drones like those featured in the 2017 short film "Slaughterbots."

"With that type of weapon, you could send a million of them in a container or cargo aircraft - so they have destructive capacity of a nuclear bomb but leave all the buildings behind," said Russell.

Using facial recognition technology, the drones could "wipe out one ethnic group or one gender, or using social media information you could wipe out all people with a political view."

Russell therefore argued that it was essential to take the next step in the form of an international ban on lethal AI, which could be summarized as: "machines that can decide to kill humans shall not be developed, deployed, or used."