By Marc Jampole
Does your spell-check program ever frustrate you when it
changes the grammatically correct “the person who” to the incorrect “the person
that,” or when it misses your mistake and lets the incorrect “the company and
their employees” stand instead of the correct “the company and its employees”?
We can blame these mistakes on the humans who programmed the
software.
But who will we blame when computerized robots decide to
bomb a village of innocent civilians while searching for an escaped soldier? Or
when an autonomous weapon decides on its own to start shooting wildly into a
shopping mall?
I’m not talking about drones, which humans operate from a
distance and over which they maintain full control.
No, I’m referring to the next advance in weapons of mass
destruction: automated weapons that make the decision to shoot, to bomb or to
torch without human intervention, based upon the weapon’s completely
independent analysis of the situation. They’re called Lethal Autonomous Weapons
Systems (LAWS) and military contractors all over the world are working
furiously to develop them. The United States, Britain, Israel and South Korea
already use technologies seen as precursors to fully autonomous weapons systems,
according to a New York Times report that’s almost two years old.
You probably haven’t heard much about LAWS. My Google News
search revealed a total of 159 stories about them on the same day that close to
eight million stories appeared about the aftermath of Rudy Giuliani’s absurd
accusation that President Barack Obama doesn’t love the United States and
almost 4.5 million stories covered the death of a minor actor named Ben Woolf
(who?).
Use of LAWS raises many technical issues. Opponents of LAWS
wonder whether we can ever program a robot to make the subtle distinction between
an enemy combatant and an innocent civilian, or to understand that the enemy
has moved its antiaircraft radar device to the roof of a hospital (an example I borrow from the Times’ Bill Keller).
Then there is the issue of faulty programming that plagues automated systems
meant to check spelling and grammar, analyze loan applications, translate from
one language to another, evaluate essays or select products for purchase. And
what happens if an electrical surge or scratch in a printed circuit makes an
autonomous weapon go haywire? Or if some rogue programmer implants malware into
the system?
The moral issues raised by having robots make battlefield
decisions for humans are even more troubling. Virtually all systems of human
morality start with the principle, “Thou Shalt Not Kill.” Since the beginning
of recorded history thousands of philosophers, historians, soldiers,
politicians and creative writers have written many millions of words pondering
when killing another human being is justifiable. We honor those who kill in society’s name and
punish those whose murderous deeds society considers unwarranted. The question of the “just war” has been one of the most
important themes in moral philosophy since at least the fourth century before
the Common Era.
From the birth of humans until today, every killing in
peacetime and war, condoned and unsanctioned, single deaths and mass
murders—all of it has been committed by individual human beings to whom we can
assign praise or blame, guilt or innocence. Taking the decision to pull the
trigger, drop the bomb or throw the grenade out of the hands of human beings
and putting it into the hands of software is inherently immoral because it makes
it impossible to determine who really is responsible for a wartime atrocity.
The generals will blame the robot or hide behind the robot for justification,
claiming that the software is infallible.
Some proponents of LAWS argue that automation will lead to
more humane wars, since robots are not subject to mistakes in analysis or to the
vengefulness, panic, fear and other emotions that color the decisions made by
men and women in battle. That’s my definition of a sick joke—something that is
funny and horrifying at the same time. The lack of emotion in a robot may
cause it to decide to level the village for strategic reasons, whereas a human
being might recognize that the deaths of innocents or destruction of historic
structures would make an attack unthinkable. And consider how much easier it
will be to go to war if all a government has to do is send out the robots. The
history of recent American wars suggests two dynamics: 1) the more our soldiers
die in a war, the more likely people are to turn against the war; and 2) the
number of deaths on the other side doesn’t sway most of the population from
supporting a war. It seems clear that having an army of autonomous robots that
hold within their operating systems the final decision to shoot or not will
lead to more wars, and more violent ones. Holding computers up as more virtuous than
humans because they analyze dispassionately is the same kind of illogical
thought process as the standard right-wing argument that businesses can regulate
themselves but that society must carefully watch food stamp and Medicaid
recipients for fraud.
Building the atom bomb was a bad idea that many of the
scientists involved later regretted. Building lethal autonomous weapons systems
is another bad idea.
I’m advising all OpEdge readers to write, phone or email
their Congressional representatives, Senators and the President of the United
States every three to four months, asking them to come out in favor of banning
all LAWS research and development in the United States and to work for a global
treaty to ban LAWS R&D internationally. The United States should impose the
same harsh sanctions on nations developing LAWS that we now impose on the
Russia, Iran and North Korea. We should refuse to buy any military
armament from any private company doing LAWS R&D.
There’s a meeting of the United Nations Convention on
Conventional Weapons (CCW) dedicated to the issue of autonomous weapons on
April 13-17. I recommend that all readers email CCW at ccw@unog.ch and tell the organization that it should come out
against any further development of LAWS and recommend sanctions against nations
and businesses that develop LAWS.
In short, we have to make LAWS against the law. Let’s not let
this genie get further out of the bottle.