The First Law
Isaac Asimov’s famous First Law of Robotics is now one small, but important step closer to becoming reality, as the European Parliament adopts a resolution calling for a ban on fully autonomous weapons.
The First Law of Robotics:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
– Isaac Asimov
Isaac Asimov first introduced his Three Laws of Robotics back in 1942 – two decades before the first modern, computer-controlled robot saw the light of day. Today, there are millions of robots all around us. But the Three Laws have still not been adopted by any of the countless states involved in the design, production and use of robots. The reason for this is pretty simple: Asimov was neither a politician, a bureaucrat nor a lawyer. He was an author, and his Three Laws were first introduced not in a motion, resolution or legal draft, but in a science fiction short story called Runaround.
Most roboticists and legislators agree that we do not yet need laws for robots. However, many scientists, activists and other conscious, thinking human beings agree that we do need laws about robots. Laws for all the people designing, programming, building and using this useful, exciting and potentially ground-breaking technology.
When I use the word «ground-breaking», that can be taken quite literally. Ground-breaking as in earth-shattering. As in firing a missile through the roof of an underground bunker. Or an earthen hut. The armies of the world employ thousands of robots today. The purpose of most of them is to make the lives of soldiers safer. They gather intelligence, and safely dispose of improvised explosive devices and unexploded ammunition. On the other hand, a few hundred robots are designed, equipped and used for quite the opposite purpose: taking human lives.
Simple and stupid
Today, most armed robots are rather simple, and incredibly stupid. Also, the armies of today do not allow their robots to fire at will. Many of these robots can move about autonomously. Some of them are even capable of acquiring their own targets and aiming their weapons. But they are not allowed to fire them on their own. In the end, an actual human being has to pull the remote control trigger. A human, not a machine, has to make the final, irrevocable decision to fire a missile and kill a human being.
This is the status quo. However, as the world’s largest armies and navies are pouring millions of dollars into robot research, the nightmare of fully autonomous weapons is drawing ever nearer. The US, China and the UK are all working on more or less autonomous fighter planes, and a number of states are developing autonomous land and sub-sea combat vehicles. It seems an army of robot warriors, fully capable of autonomously selecting their targets and attacking them with lethal force, may be coming soon to a battlefield near you.
In the world of robotics, technology progresses incredibly fast. So fast, in fact, that law, politics and ethics have not been able to keep up. Or, rather, the legislators haven’t. Unlike Asimov, lawmakers and politicians have been unable to look into the future, and unwilling to face the ethical challenges posed by technological advances. Therefore, we have absolutely no legal guarantees that autonomous, armed robots will not be allowed to call their own shots in the relatively near future. No Three Laws of Robotics stand between us and killer robots. Not even one.
Fortunately, this is about to change. On 27 February, the European Parliament passed a resolution calling for a ban on fully autonomous weapons – killer robots, if you will.
The resolution calls on the European Union (EU) member states, the Council of Ministers of the EU, and the EU’s High Representative for Foreign Affairs and Security Policy to “ban the development, production and use of fully autonomous weapons which enable strikes to be carried out without human intervention.”
From sci-fi to reality
Of course, a resolution is not a law. And a resolution by the European Parliament does not even come close. But it is a start. Perhaps something like Asimov’s first law may take the step from the realms of sci-fi into reality before true, autonomous killer robots do. Or at least, not too long after.
PS: In case you were thinking that the resolution is just another piece of science fiction, please note that it doesn’t only concern the future. It also deals with the grim realities of the here and now, as it includes a call for a ban on the practice of so-called «extrajudicial targeted killings» – assassination by drone.