How Automation Can Wage a New Form of Violence

Reprinted from Disruptive Power: The Crisis of the State in the Digital Age by Taylor Owen with permission from Oxford University Press.  Copyright © 2015 by Oxford University Press.


The Violence of Algorithms

In December 2010, I attended a training session in Tysons Corner, Virginia, just outside Washington, DC, for an intelligence analytics software program called Palantir. Co-founded by Peter Thiel, a libertarian Silicon Valley billionaire from PayPal and Facebook, Palantir is a slick tool kit of data visualization and analytics used by the NSA, FBI, CIA, and other US national security and policing institutions. As far as I could tell, I was the only civilian in the course, which I took to explore Palantir’s potential for use in academic research.

Palantir is designed to pull together as much data as possible, then tag it and try to make sense of it. For example, all of the data about a military area of operation, including base maps, daily intelligence reports, mission reports, and the massive amounts of surveillance data now being collected could be viewed and analyzed for patterns in one platform. The vision being sold is one of total comprehension, of making sense of a messy operating environment flooded with data. The company has a Silicon Valley mentality: War is hell. Palantir can cut through the fog.

The Palantir trainer took us through a demonstration “investigation.” Each trainee got a workstation with two screens and various datasets: a list of known insurgents, daily intelligence reports, satellite surveillance data, and detailed city maps. We uploaded these into Palantir, one by one, and each new dataset showed us a new analytic capability of the program. With more data came greater clarity—which is not what usually happens when an analyst is presented with vast streams of data.

[[{"type":"media","view_mode":"media_small","fid":"599067","attributes":{"alt":"","class":"media-image media-image-left","style":"float: left;","typeof":"foaf:Image"}}]]

In our final exercise, we added information about the itinerary of a suspected insurgent, and Palantir correlated the location and time of one meeting with information it had about the movements of a known bombmaker. In “real life,” the next step would be a military operation: the launch of a drone strike, the deployment of a Special Forces team. Palantir had shown us how an analyst could process disparate data sources efficiently to target the use of violence. It was an impressive demonstration, and probably an easy sell for the government analysts taking the course.

But I left Tysons Corner with plenty of questions. The data we input and tagged included typos and other mistakes, as well as our unconscious biases. When we marked an individual as a suspect, that data was pulled into the Palantir database as a discrete piece of information, to be viewed and analyzed by anyone with access to the system, decontextualized from the rationale behind our assessment. Palantir’s algorithms—the conclusions and recommendations that make its system “useful”—carry the biases and errors of the people who wrote them. For example, a suspected insurgent might have turned up in multiple intelligence reports, one calling him a possible threat and another offering a more nuanced assessment of him. When that suspect is then cross-referenced with a known bombmaker, you can bet which analysis is prioritized. Such questions have not slowed down Palantir, which reached a billion-dollar valuation faster than any American company before it, largely on the strength of its government security contracts. In 2014, Palantir was valued at between $5 billion and $8 billion.
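
To make that concern concrete, here is a minimal sketch, in Python, of the kind of workflow the training session walked through: an entity carries a bare “suspect” tag into a spatiotemporal cross-reference, while the conflicting assessments behind the tag stay behind. The data model, the names, and the co_located helper are invented for illustration; this is not Palantir’s actual schema or matching logic.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Entity:
    name: str
    tags: set = field(default_factory=set)           # what other analysts see
    assessments: list = field(default_factory=list)  # the context they usually don't

suspect = Entity("Person A")
suspect.assessments.append("Report 1: possible threat")
suspect.assessments.append("Report 2: likely incidental contact")
suspect.tags.add("suspect")            # one analyst's judgment, now a discrete datum

bombmaker = Entity("Person B", tags={"known bombmaker"})

def co_located(itinerary, sightings, window_minutes=60):
    """Flag any point where two people were in the same place within a time window."""
    hits = []
    for place_a, time_a in itinerary:
        for place_b, time_b in sightings:
            if place_a == place_b and abs((time_a - time_b).total_seconds()) <= window_minutes * 60:
                hits.append((place_a, time_a))
    return hits

itinerary = [("Market Square", datetime(2010, 12, 1, 14, 0))]    # the suspect's movements
sightings = [("Market Square", datetime(2010, 12, 1, 14, 30))]   # the bombmaker's known movements

if "suspect" in suspect.tags and co_located(itinerary, sightings):
    # Only the tag is consulted here; the nuance in Report 2 never surfaces.
    print(f"{suspect.name} flagged: co-located with {bombmaker.name}")
```

The point is the final check: by the time the correlation fires, only the tag is in play, and the more nuanced assessment never enters the calculation.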

And analysts who use it have no shortage of data to feed into the system. All around us, sensors are collecting data at a scale and with a precision that in many cases approaches real-time total surveillance. For example, wide-area surveillance, also called persistent ground surveillance, links networks of video cameras that local law-enforcement agencies use to detect and analyze crime in near real time. Newer wide-area systems do not require a network of individual cameras; they can instead take high-resolution images of many square miles at once. The Department of Homeland Security tethered one such motion-imagery system 2,000 feet above the desert in Nogales, Arizona. On its first night in use, it identified 30 suspects, who were brought in for questioning.

This type of video analysis requires new image-processing capabilities. The MATE system, for example, detects movement in the camera’s field of vision that the human eye, even a trained officer’s, would not notice; in an airport, it can flag a suspicious bag. The Camero Xaver system uses 3D image-reconstruction algorithms and ultra-wideband (UWB) sensors to create representations of objects behind barriers.
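
The principle behind this kind of motion detection is, at bottom, frame differencing: compare consecutive video frames and flag the regions that change more than a threshold. The Python sketch below illustrates only that general principle on synthetic frames; it is not MATE’s or any vendor’s actual algorithm, and the threshold is arbitrary.

```python
import numpy as np

def changed_regions(prev_frame, next_frame, threshold=30):
    """Return a boolean mask of pixels whose intensity changed noticeably between frames."""
    diff = np.abs(next_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

# Two synthetic 8-bit grayscale frames; a small "object" appears in the second.
frame1 = np.zeros((100, 100), dtype=np.uint8)
frame2 = np.zeros((100, 100), dtype=np.uint8)
frame2[40:45, 60:65] = 255

mask = changed_regions(frame1, frame2)
print("pixels flagged as motion:", int(mask.sum()))   # 25
```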

In other words, it can see through walls.

Facial recognition and other biometric technologies are also progressing rapidly. A pilot program in San Diego called the Automated Regional Justice Information System applies algorithms to individual frames from live video feeds and can cross-reference a face against database photographs at a rate of a million comparisons a second. One of the founders of facial recognition technology, physicist Joseph J. Atick, now cautions against its proliferation, arguing that it is “basically robbing everyone of their anonymity.” A company called Extreme Reality has developed a biometric scanning system that takes images from surveillance video, creates a map of a person’s skeleton, and uses it as a baseline for detecting suspicious movements. Google Glass and other miniature cameras move us toward a world in which nothing is private and all behavior is captured.
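
Large-scale face matching is typically structured around “embeddings”: each face is reduced to a numeric feature vector, and a probe face is compared against every stored vector in one bulk operation, which is how rates of a million comparisons a second become plausible. The Python sketch below uses random vectors as stand-ins for real biometric templates; it is not the San Diego system’s actual pipeline, and the database size and embedding dimension are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# 100,000 enrolled faces, each reduced to a 128-dimensional unit vector.
# (A real watch list would hold millions; the comparison below scales the same way.)
database = rng.normal(size=(100_000, 128))
database /= np.linalg.norm(database, axis=1, keepdims=True)

# A "probe" face: a noisy re-capture of enrolled record 42.
probe = database[42] + 0.05 * rng.normal(size=128)
probe /= np.linalg.norm(probe)

# One matrix-vector product compares the probe against every record at once.
scores = database @ probe                     # cosine similarities
best = int(np.argmax(scores))
print(f"best match: record {best}, similarity {scores[best]:.3f}")
```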

Collecting all this data is not enough; it has to be processed, and there are limits on the computing power available to make sense of it. But if research on quantum computing continues to progress at its current pace, those limits could disappear. For political theorist James Der Derian, this potentially revolutionary advance in computational capability has dramatic implications for the international order. Whoever has access to quantum computing power will have such an advantage in the control and understanding of information that it could lead to a new kind of arms race. Those who possess quantum computers could in theory predict the stock market, model global weather patterns, make significant advances in artificial intelligence, and process and understand vast stores of real-time surveillance data.

As Der Derian argues, this could signal a new age and form of war. “The goal is to convey a verbal facsimile of contemporary global violence as it phase-shifts from a classically scripted War 1.0 to an image-based War 2.0, to an indeterminate, probabilistic and observable-dependent form that defies fixation by word, number or image, that is, quantum war.”

Philosopher Paul Virilio warns of the potential for a future “information bomb,” in which disaster could occur simultaneously, everywhere on the planet. This is a concept acknowledged in theoretical physics but not in the social sciences, which is why Der Derian urges the integration of science and math into the study of international relations. Disciplinary borders, he argues, must be eliminated in favor of a “post classical approach,” one that moves away from the traditional, linear, systematic understanding of war to one that accounts for its messiness and non-linearity.

The potential power of quantum computing puts information control at the center of warfare. Andrew Marshall, director of the Office of Net Assessment at the US Department of Defense, has said that if World War I was the chemists’ war and World War II was the physicists’ war, World War III will be the information researchers’ war.

Over the course of history, the automation of military technology has put ever more distance between the soldier and his target. Crossbows, muskets, machine guns, and airplanes each increased that distance beyond the technologies that preceded them, but all still required human operation and decision making. Increasingly, however, the decisions made in battle are also being automated, eliminating a step of human involvement between analysis and action.

The idea of robotic war, and the protection it promises, is nothing new. In 1495 Leonardo da Vinci proposed a “mechanical knight” operated by pulleys under a suit of armor. In 1898, Nikola Tesla built a remote-controlled boat that he tried to sell to the US military as an early form of torpedo, an idea the Germans implemented in World War I. The United States first developed a gyroscope-guided bomb in 1914. Throughout the 20th century, most advances in autonomous weaponry involved missile guidance systems: in the 1950s and 1960s, both the Soviet Union and the United States began developing computer-guided missiles that correct their flight autonomously, and in 1978 the United States deployed the first GPS satellite, inaugurating a system that would greatly enhance the capabilities of unmanned aerial vehicles. These systems are not infallible, however; in 1988, an automated air-defense system on a US warship in the Persian Gulf mistakenly shot down a commercial airliner, killing 290 people.

But it was only at the turn of the 21st century, with advances in drone technology and artificial intelligence, that the possibilities of robot war began to be realized. The United States has deployed 65 Lockheed Martin blimps in Afghanistan that provide real-time surveillance and data processing across 100 square kilometers at a time. These blimps are equipped with high-definition cameras and sensors that detect sound and motion. The 360-degree Kestrel motion-imagery system, for example, can record all activity taking place in a city for periods of up to 30 days. To cope with that volume of information, the system records only activity it assesses as valuable, and its judgment evolves over time through machine learning.

The United States is not the only country using automated technology. Russia has deployed armed robots to guard five ballistic-missile installations. Each robot weighs nearly a ton, can travel at 45 kilometers per hour, and uses radar and a laser rangefinder to navigate, analyze potential targets, and fire machine guns without a human pulling the trigger. Russia plans to vastly expand its use of armed robots, supposedly saving its military more than a billion dollars a year. South Korea’s Super Aegis 2 automated gun tower can lock onto a human target up to three kilometers away in complete darkness and automatically fire a machine gun, rocket launcher, or surface-to-air missile. For now, a human is required to make the final kill decision, but this is not technically necessary. South Korea has proposed deploying the Super Aegis 2 in the volatile demilitarized zone that separates it from North Korea. Communications between South and North are terrible, making this move toward automatic killing in the demilitarized zone extremely dangerous. Automation is also used for defensive purposes. The Israeli Iron Dome is an air-defense system designed to shoot down rockets and artillery shells; Israeli officials claim that the Iron Dome, operational since March 2011, shot down more than 400 rockets in its first 18 months. Drones, from the surveillance blimps described above to swarms of microdrones outfitted with cameras and other sensors, represent another major advance, one that could transform the way military intelligence is collected and processed.

Underlying all of these technologies is computational power. Algorithmic systems that trace and record the movements of people, at airports, through credit card data and passports, and via visual and other surveillance technologies, let us detect patterns and ascribe risk to behaviors outside the “norm.” The calibration of this norm can be a human decision or a computational one, but in the end these norms are built into algorithms. Automation also offers the promise of predicting future events. As machines learn and algorithms develop, this process becomes further and further removed from human intervention.
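
A minimal sketch, with invented behavioral features and an arbitrary threshold, shows how a computed norm becomes a risk score: deviation from the statistical baseline is flagged, whether or not the deviation means anything.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented behavioral features per person: [trips per month, average purchase, countries visited]
population = rng.normal(loc=[4.0, 80.0, 1.0], scale=[2.0, 30.0, 0.5], size=(10_000, 3))

mean, std = population.mean(axis=0), population.std(axis=0)     # the computed "norm"

def risk_score(person):
    """Mean absolute z-score across features: distance from the calibrated norm."""
    return float(np.mean(np.abs((person - mean) / std)))

ordinary_traveler = np.array([5.0, 90.0, 1.0])
frequent_flyer    = np.array([30.0, 600.0, 8.0])   # unusual, not necessarily dangerous

for label, person in [("ordinary", ordinary_traveler), ("outlier", frequent_flyer)]:
    score = risk_score(person)
    print(label, round(score, 2), "FLAGGED" if score > 3 else "ok")
```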

This distancing, acting through technologies, dilutes human responsibility. As surveillance expert Bruce Schneier notes, “any time we’re judged by algorithms, there’s the potential for false positive. . . . [O]ur credit ratings depend on algorithms; how we’re treated at airport security does, too.” Most alarming of all, drone targeting is partly based on algorithmic surveillance. Fully automated drones that can make decisions and even kill on their own are still in development, but they are being actively tested.
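
Schneier’s point about false positives has simple arithmetic behind it. The numbers below are illustrative, not drawn from any real program, but they show why a screening algorithm that is right 99 percent of the time, applied to millions of people among whom genuine threats are rare, flags overwhelmingly innocent people.

```python
# Illustrative numbers only, not drawn from any real program.
population   = 10_000_000   # people screened
true_threats = 100          # actual threats among them
sensitivity  = 0.99         # chance a real threat is flagged
false_alarm  = 0.01         # chance an innocent person is flagged

flagged_threats   = true_threats * sensitivity
flagged_innocents = (population - true_threats) * false_alarm

precision = flagged_threats / (flagged_threats + flagged_innocents)
print(f"people flagged:         {flagged_threats + flagged_innocents:,.0f}")
print(f"of whom actual threats: {flagged_threats:,.0f}")
print(f"chance a flag is right: {precision:.2%}")   # roughly 0.1 percent
```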

As a 2014 Human Rights Watch report on automated war puts it, “Fully autonomous weapons represent the step beyond remote-controlled armed drones. Unlike any existing weapons, these robots would identify and fire on targets without meaningful human intervention. They would therefore have the power to determine when to take human life.” The international community is taking notice. In the past two years, numerous academic and policy reports have addressed the legal, ethical, and human-rights implications of automated killing, and automated war was a topic of debate at the 2014 UN Convention on Certain Conventional Weapons (CCW) conference. A civil-society campaign, the Campaign to Stop Killer Robots, has been launched.

Automation has radically reshaped the geography of violence. Just as Anonymous can wield power without occupying a discrete and contiguous geographical space, states can now wage war without invading enemy territory. In practice, this has meant that the difference between international and domestic security paradigms has eroded. While the technology underpinning this capacity had long been in development, the defining moment in the shift was 9/11, when the decentralized network of al-Qaeda attacked the heart of a global superpower from the other side of the world.

In response, the United States deviated from both domestic and international legal and military norms in pursuit of a diffuse organization. At home, the Bush administration began to operate under what it called the “one percent doctrine,” which dictates that if there is a 1 percent chance of an event occurring, the government must treat it as a certainty. This doctrine, combined with the questionable notion that 9/11 could have been predicted with the proper data, has led to a culture of massive data collection, from the NSA surveillance apparatus exposed by Edward Snowden to the widespread deployment of cameras, sensors, and drones. These programs seek to conquer the unknown and, like the promise of Palantir, to create order from uncertainty. As geographer Louise Amoore argues, the law shifted to allow and accept the use of massive, invasive databases to monitor civilian populations for the purpose of “risk management,” despite the potential to violate civil rights.
