The next best thing

December 3, 2014
A Gladiator Tactical Unmanned Ground Vehicle at Redstone Arsenal. Photo: US Army.

Sadly, we humans may never agree on an international ban on autonomous weapons.

Even a universal convention on the use of artificial intelligence drones in warfare may be too much to ask for.

And, alas, Asimov’s famous three laws will probably never leave the realm of science fiction. Three treaties may well be our most realistic hope of regulating the robot arms race.

«If you look closely at international law, it doesn’t have anything to say about artificial intelligence and autonomous weapons. That’s a problem» – John Frank Weaver in Slate.

If you have followed this blog for some time, you have no doubt noticed that I am quite fond of robots. I think they are fun, exciting, interesting and incredibly useful things. I really believe that the world needs more robots. On the other hand, I really don’t like killer robots. If there is one thing the world really does not need, it is a bunch of weapon-wielding robots with the ability to autonomously target and kill human beings.

As Ryan Gariepy, co-founder and CTO of Clearpath Robotics, stated in an open letter:

«No nation in the world is ready for killer robots – technologically, legally, or ethically. More importantly, we see no compelling justification that this technology needs to exist in human hands.»

Like Gariepy, I really believe that the development of killer robots – also known as autonomous weapons – is both unwise and unethical. Therefore, I am also a firm supporter of an international ban on the production and use of autonomous weapons.

Alas, the militaries of the world might not be willing to give up their new toys. With military research agencies spending millions and millions of dollars on the development of military robots and robotic systems, a treaty that renders many of these robots all but useless to their sponsors is unlikely to be very popular with many of the world’s military powers.

Sure enough, the European Parliament has passed a resolution calling for a ban on the development, production and use of fully autonomous weapons which enable strikes to be carried out without human intervention. However, a resolution is neither a law nor an international treaty. Also, the European Parliament agreeing on a resolution is one thing; getting military powers like Iran, China, Israel, Russia and the US to sign a treaty is something else entirely.

An autonomous RQ-A Firestorm helicopter landing on an aircraft carrier. Photo: US Navy.

Three treaties to rule them all?

«Ideally, the nations of the Earth would gather to sign one big comprehensive treaty, the “United Nations Convention on the Use of Artificial Intelligence Drones in Warfare.” Unfortunately, that is unlikely,» John Frank Weaver writes in a Future Tense article in Slate.

Weaver is an attorney working on artificial intelligence law, and the author of the book Robots Are People, Too. He probably knows what he’s writing about. Sadly.

Still, even though the picture Weaver paints in his article is a bleak one, it isn’t without its glimmers of hope. While one convention banning or strictly limiting artificial intelligence drones may be out of reach, Weaver believes a lot can be done with multiple conventions addressing particular aspects of AI drone use. In the Slate article, he proposes three such treaties:

  1. A Treaty on the Testing and Operational Standards of Artificial Intelligence Drones Intended for Combat. This treaty would regulate the development of internationally acceptable AI drones and establish the necessary tests, failsafe mechanisms and standards ensuring safer performance by those autonomous robots.
  2. A Treaty on the Liability of Artificial Intelligence Drones. This treaty would make nations responsible for the actions of their AI drones in the same way they are liable for the actions of their military personnel. It should also provide training requirements for both AI drones and the humans who oversee them.
  3. A Treaty on the Use of Artificial Intelligence Drones in Combat. Weaver intends this treaty to provide a high moral standard for all nations, even the ones that do not become parties to the treaty. He hopes that it could work like the convention on anti-personnel mines, creating a moral high ground on autonomous weapons and acting as a brake even on nations that do not sign it.

While I still prefer the «United Nations Convention on the Use of Artificial Intelligence Drones in Warfare», I have to admit that Weaver’s three treaties may be a tad more realistic, at least in the short term. And, in the case of autonomous weapons, we may have to settle for the next best thing. Doing nothing would mean letting armed robots, weapons manufacturers, military commanders and more or less legitimate heads of state operate in an international legal vacuum.

Now, that doesn’t sound awfully safe, does it?
