A future for drones: Automated killing

Eugen Leitl eugen at leitl.org
Wed Sep 21 01:43:34 PDT 2011


http://www.washingtonpost.com/national/national-security/a-future-for-drones-automated-killing/2011/09/15/gIQAVy9mgK_print.html

A future for drones: Automated killing

By Peter Finn

One afternoon last fall at Fort Benning, Ga., two model-size planes took off,
climbed to 800 and 1,000 feet, and began criss-crossing the military base in
search of an orange, green and blue tarp.

The automated, unpiloted planes worked on their own, with no human guidance,
no hand on any control.

After 20 minutes, one of the aircraft, carrying a computer that processed
images from an onboard camera, zeroed in on the tarp and contacted the second
plane, which flew nearby and used its own sensors to examine the colorful
object. Then one of the aircraft signaled to an unmanned car on the ground so
it could take a final, close-up look.

Target confirmed.
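The article does not say how the Georgia Tech software worked, but a crude
version of the first aircraft's task, spotting a known multicolor tarp in
camera imagery, can be sketched with off-the-shelf computer vision. The
Python sketch below uses OpenCV to threshold a frame for orange, green and
blue blobs and reports a candidate when all three cluster together; the
color ranges, blob size and spread thresholds are invented for illustration
and are not the demonstration's actual parameters.

    # Hypothetical color-blob tarp detector; illustrative only, not the
    # Georgia Tech demonstration software. All thresholds are assumed.
    # Written against OpenCV 4.x.
    import math
    import cv2
    import numpy as np

    # Approximate HSV ranges for the tarp's three panels (assumed values).
    COLOR_RANGES = {
        "orange": ((5, 100, 100), (20, 255, 255)),
        "green":  ((40, 80, 80),  (80, 255, 255)),
        "blue":   ((100, 80, 80), (130, 255, 255)),
    }
    MIN_AREA = 500  # minimum blob area in pixels (assumed)

    def largest_blob_center(hsv, lo, hi):
        """Centroid of the largest blob within an HSV range, or None."""
        mask = cv2.inRange(hsv, np.array(lo, np.uint8), np.array(hi, np.uint8))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        blob = max(contours, key=cv2.contourArea)
        if cv2.contourArea(blob) < MIN_AREA:
            return None
        m = cv2.moments(blob)
        return (m["m10"] / m["m00"], m["m01"] / m["m00"])

    def detect_tarp(frame_bgr, max_spread=200.0):
        """Report a candidate when orange, green and blue blobs cluster."""
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        centers = [largest_blob_center(hsv, lo, hi)
                   for lo, hi in COLOR_RANGES.values()]
        if any(c is None for c in centers):
            return None
        cx = sum(x for x, _ in centers) / 3.0
        cy = sum(y for _, y in centers) / 3.0
        spread = max(math.hypot(x - cx, y - cy) for x, y in centers)
        return (cx, cy) if spread <= max_spread else None

A real system would also georeference the image coordinates and hand the
candidate off to the second aircraft for confirmation, which the sketch
omits.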

This successful exercise in autonomous robotics could presage the future of
the American way of war: a day when drones hunt, identify and kill the enemy
based on calculations made by software, not decisions made by humans. Imagine
aerial "Terminators," minus beefcake and time travel.

The Fort Benning tarp "is a rather simple target, but think of it as a
surrogate," said Charles E. Pippin, a scientist at the Georgia Tech Research
Institute, which developed the software to run the demonstration. "You can
imagine real-time scenarios where you have 10 of these things up in the air
and something is happening on the ground and you don't have time for a human
to say, 'I need you to do these tasks.' It needs to happen faster than that."

The demonstration laid the groundwork for scientific advances that would
allow drones to search for a human target and then make an identification
based on facial-recognition or other software. Once a match was made, a drone
could launch a missile to kill the target.

Military systems with some degree of autonomy, such as robotic, weaponized
sentries, have been deployed in the demilitarized zone between South and
North Korea and other potential battle areas. Researchers are uncertain how
soon machines capable of collaborating and adapting intelligently in
battlefield conditions will come online. It could take one or two decades, or
longer. The U.S. military is funding numerous research projects on autonomy
to develop machines that will perform some dull or dangerous tasks and to
maintain its advantage over potential adversaries who are also working on
such systems.

The killing of terrorism suspects and insurgents by armed drones, controlled
by pilots sitting in bases thousands of miles away in the western United
States, has prompted criticism that the technology makes war too antiseptic.
Questions also have been raised about the legality of drone strikes when
employed in places such as Pakistan, Yemen and Somalia, which are not at war
with the United States. This debate will only intensify as technological
advances enable what experts call lethal autonomy.

The prospect of machines able to perceive, reason and act in unscripted
environments presents a challenge to the current understanding of
international humanitarian law. The Geneva Conventions require belligerents
to use discrimination and proportionality, standards that would demand that
machines distinguish among enemy combatants, surrendering troops and
civilians.

"The deployment of such systems would reflect a paradigm shift and a major
qualitative change in the conduct of hostilities," Jakob Kellenberger,
president of the International Committee of the Red Cross, said at a
conference in Italy this month. "It would also raise a range of fundamental
legal, ethical and societal issues, which need to be considered before such
systems are developed or deployed."

Drones flying over Afghanistan, Pakistan and Yemen can already move
automatically from point to point, and it is unclear what surveillance or
other tasks, if any, they perform while in autonomous mode. Even when
directly linked to human operators, these machines are producing so much data
that processors are sifting the material to suggest targets, or at least
objects of interest. That trend toward greater autonomy will only increase as
the U.S. military shifts from one pilot remotely flying a drone to one pilot
remotely managing several drones at once.
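Point-to-point flight is the most mature piece of this autonomy: at its core
it is a waypoint-following loop. The sketch below is a minimal, hypothetical
rendering of that loop, not any fielded autopilot; the acceptance radius and
the flat-earth approximation are simplifying assumptions.

    # Hypothetical waypoint-following loop; illustrative only, not a
    # fielded autopilot. Thresholds and approximations are assumed.
    import math

    ACCEPT_RADIUS_M = 50.0      # waypoint acceptance radius, meters (assumed)
    EARTH_RADIUS_M = 6371000.0  # mean Earth radius, meters

    def bearing_and_distance(lat1, lon1, lat2, lon2):
        """Bearing (radians from north) and distance (meters), using a
        flat-earth approximation adequate for short legs."""
        dlat = math.radians(lat2 - lat1)
        dlon = math.radians(lon2 - lon1) * math.cos(math.radians(lat1))
        return math.atan2(dlon, dlat), EARTH_RADIUS_M * math.hypot(dlat, dlon)

    def next_heading(position, waypoints, index):
        """One step of the loop: steer toward the current waypoint and
        advance the index once inside the acceptance radius."""
        lat, lon = position
        heading, dist = bearing_and_distance(lat, lon, *waypoints[index])
        if dist < ACCEPT_RADIUS_M and index + 1 < len(waypoints):
            index += 1
            heading, _ = bearing_and_distance(lat, lon, *waypoints[index])
        return heading, index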

But humans still make the decision to fire, and in the case of CIA strikes in
Pakistan, that call rests with the director of the agency. In future
operations, if drones are deployed against a sophisticated enemy, there may
be much less time for deliberation and a greater need for machines that can
function on their own.

The U.S. military has begun to grapple with the implications of emerging
technologies.

"Authorizing a machine to make lethal combat decisions is contingent upon
political and military leaders resolving legal and ethical questions,"
according to an Air Force treatise called Unmanned Aircraft Systems Flight
Plan 2009-2047. "These include the appropriateness of machines having this
ability, under what circumstances it should be employed, where responsibility
for mistakes lies and what limitations should be placed upon the autonomy of
such systems."

In the future, micro-drones will reconnoiter tunnels and buildings, robotic
mules will haul equipment and mobile systems will retrieve the wounded while
under fire. Technology will save lives. But the trajectory of military
research has led to calls for an arms-control regime to forestall any
possibility that autonomous systems could target humans.

In Berlin last year, a group of robotic engineers, philosophers and human
rights activists formed the International Committee for Robot Arms Control
(ICRAC) and said such technologies might tempt policymakers to think war can
be less bloody.

Some experts also worry that hostile states or terrorist organizations could
hack robotic systems and redirect them. Malfunctions also are a problem: In
South Africa in 2007, a semiautonomous cannon fatally shot nine friendly
soldiers.

The ICRAC would like to see an international treaty, such as the one banning
antipersonnel mines, that would outlaw some autonomous lethal machines. Such
an agreement could still allow automated antimissile systems.

"The question is whether systems are capable of discrimination," said Peter
Asaro, a founder of the ICRAC and a professor at the New School in New York
who teaches a course on digital war. "The good technology is far off, but
technology that doesn't work well is already out there. The worry is that
these systems are going to be pushed out too soon, and they make a lot of
mistakes, and those mistakes are going to be atrocities."

Research into autonomy, some of it classified, is racing ahead at
universities and research centers in the United States, and that effort is
beginning to be replicated in other countries, particularly China.

"Lethal autonomy is inevitable," said Ronald C. Arkin, the author of
"Governing Lethal Behavior in Autonomous Robots," a study that was funded by
the Army Research Office.

Arkin believes it is possible to build ethical military drones and robots,
capable of using deadly force while programmed to adhere to international
humanitarian law and the rules of engagement. He said software can be created
that would lead machines to return fire with proportionality, minimize
collateral damage, recognize surrender, and, in the case of uncertainty,
maneuver to reassess or wait for a human assessment.

In other words, rules as understood by humans can be converted into
algorithms followed by machines for all kinds of actions on the battlefield.
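Arkin's proposal amounts to encoding such rules as explicit checks that run
before any lethal action. The sketch below is a loose, hypothetical rendering
of that idea in Python, not his actual "ethical governor"; the assessment
fields and the confidence threshold are stand-ins.

    # Loose, hypothetical sketch of a rule-based engagement check in the
    # spirit of Arkin's work; not his actual system. All predicates and
    # thresholds are stand-ins.
    from dataclasses import dataclass
    from enum import Enum

    class Action(Enum):
        HOLD_FIRE = "hold fire"
        ENGAGE = "engage"
        DEFER_TO_HUMAN = "defer to human operator"

    @dataclass
    class Assessment:
        target_class: str         # "combatant", "surrendering", "civilian", "unknown"
        confidence: float         # classifier confidence in [0, 1]
        expected_collateral: int  # estimated civilian harm (stand-in metric)
        proportional: bool        # response judged proportional to the threat

    def engagement_decision(a: Assessment, min_confidence: float = 0.95) -> Action:
        # Discrimination: never engage protected categories.
        if a.target_class in ("surrendering", "civilian"):
            return Action.HOLD_FIRE
        # Uncertainty: wait for a human rather than guess.
        if a.target_class == "unknown" or a.confidence < min_confidence:
            return Action.DEFER_TO_HUMAN
        # Proportionality and collateral constraints must both hold.
        if not a.proportional or a.expected_collateral > 0:
            return Action.HOLD_FIRE
        return Action.ENGAGE

The rule check itself is trivial; the contested part, as the skeptics quoted
below argue, is whether software can populate fields like target_class and
expected_collateral reliably in battlefield conditions.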

"How a war-fighting unit may think: we are trying to make our systems behave
like that," said Lora G. Weiss, chief scientist at the Georgia Tech Research
Institute.

Others, however, remain skeptical that humans can be taken out of the loop.

"Autonomy is really the Achilles' heel of robotics," said Johann Borenstein,
head of the Mobile Robotics Lab at the University of Michigan. "There is a
lot of work being done, and still we haven't gotten to a point where the
smallest amount of autonomy is being used in the military field. All robots
in the military are remote-controlled. How does that sit with the fact that
autonomy has been worked on at universities and companies for well over 20
years?"

Borenstein said human skills will remain critical in battle far into the
future.

"The foremost of all skills is common sense," he said. "Robots don't have
common sense and won't have common sense in the next 50 years, or however
long one might want to guess."




