Earlier this month, Argentina’s Ministry of Security announced the creation of an “Applied Artificial Intelligence for Security Unit,” a special task force whose remit will be to “use machine learning algorithms to analyze historical crime data to predict future crimes and help prevent them.” You can read the original announcement in Spanish here.

Now, whatever arguments exist for and against the creation of this new crime-fighting force, all the least funny people reading the headline of this story skipped the article entirely to post animated GIFs of Tom Cruise operating what appears to be an Xbox Kinect. Because if you read the phrase “predict future crimes,” you are going to think Minority Report, the Cruise-starring and Steven Spielberg-directed adaptation of the Philip K. Dick story, “The Minority Report.” After all, human intelligence is genuinely not that useful for much more than playing Snap.

However, it’s worth noting that this is not even the first time that Minority Report has become a reality. In case you missed it, a couple of years ago the University of Chicago used publicly available data to predict future crimes a week before they happened with, it claimed, 90 percent accuracy. And in 2018, the West Midlands Police in the UK were researching a “National Data Analytics Solution (NDAS)” that would use a “combination of AI and statistics” to predict violent crimes.

As well as references to Minority Report, stories like this also tend to invite re-sharings of a post made by writer Alex Blechman three years ago: “Sci-Fi Author: In my book I invented the Torment Nexus as a cautionary tale. Tech Company: At long last, we have created the Torment Nexus from classic sci-fi novel Don’t Create The Torment Nexus.”

There’s truth in it. As much as hopeful science fiction such as Star Trek has inspired real-life technology with its communicators, hyposprays, and tricorders, the genre’s dystopian twin has provided plenty of cautionary tales that have led people to think, “That’s cool and I am going to pay money to make it happen.” Just look at all the research in the last 30 years that has gone into whether it is possible to make a theme park of dinosaurs!

But shouting Minority Report whenever someone decides to try their hand at this sort of technology is a problem, to the point where Minority Report’s worth as a piece of social commentary begins to break down. Let us explain.

The Majority Report

The first point of order is that in Minority Report, the police have not actually created an AI that can predict when crimes are going to happen. In Minority Report, the MacGuffin that allows for the “Pre-Crime Police” is not a computer, but three psychic mutants trapped in a bathtub. And we are further off from developing mutant-in-bath technology than you might think.

A closer analogue to the sort of pre-crime technology that repeatedly turns up in headlines is the TV show Person of Interest. One of the most post-9/11 shows you could possibly imagine, Person of Interest gives us an inventor who develops an AI that can predict the future with 100 percent accuracy. The government wants to use it to predict and prevent terrorist attacks, but when the inventor discovers the government is discarding predictions of other murders and violent crimes, he turns vigilante.

With both Minority Report and Person of Interest, any attempt to use these stories to analyze how these technologies could be applied in the real world quickly falls apart because of one crucial difference. In fiction, these technologies work.

It is not surprising that in the aftermath of 9/11 there were a lot of people interested in using computers to analyze data and predict who would become a terrorist. Very quickly these solutions ran into a problem (it is a debate for elsewhere whether this was actually a “problem” as far as the people implementing these solutions were concerned): there are far more people who fit the criteria for “possible terrorist” than there are terrorists. Someone who is angry about politics and is buying a large quantity of fertilizer may be planning to build a bomb. But it is far more likely you are about to arrest an innocent man who likes to stay informed about current events and happens to enjoy gardening. Of course, the criteria for these predictions are not limited to fertilizer purchases; they also include demographic characteristics. The line between “predictive policing” and “racial profiling” is so blurry it’s practically impossible to see.
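To see the scale of that problem, here is a back-of-the-envelope Bayes calculation. The numbers are entirely hypothetical, and deliberately generous to the screening system; the point is only how badly a tiny base rate swamps even a very accurate predictor.

```python
# Hypothetical base-rate arithmetic: 100 actual plotters in a population
# of 300 million, screened by a model that catches plotters 99% of the
# time with only a 1% false positive rate (far better than any real system).
population = 300_000_000
plotters = 100
true_positive_rate = 0.99   # P(flagged | plotter)
false_positive_rate = 0.01  # P(flagged | innocent)

flagged_plotters = plotters * true_positive_rate                    # ~99 people
flagged_innocents = (population - plotters) * false_positive_rate  # ~3 million people

# Bayes' rule: P(plotter | flagged)
p_plotter_given_flag = flagged_plotters / (flagged_plotters + flagged_innocents)
print(f"{p_plotter_given_flag:.4%}")  # ~0.0033%: about 1 flag in 30,000 is right
```

Even with those flattering made-up numbers, roughly 30,000 gardeners get a knock on the door for every actual bomber.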

A specific real-life example occurred in Chicago in 2013 when a Black man named Robert McDaniel turned up on Chicago PD’s predictive “heat list.” Much like in Person of Interest, the system the police used forecast that McDaniel would be involved in a violent crime, but could not say whether he would be the shooter or the victim. So the police went to talk to him.

McDaniel was a Black man in a poor neighborhood who had been in trouble with the police before for marijuana-related offenses and street gambling, but nothing violent. So when some neighbors saw him being visited by police, but never arrested, it raised suspicions.

McDaniel found himself under constant police surveillance while friends began to distance themselves. People assumed he was informing the police, and McDaniel’s claims about a predictive “heat list” sounded like so much science fiction. Eventually, these suspicions led to him getting shot.

It’s depressingly, bitterly ironic. Almost like a bad sci-fi story. The algorithm designed to prevent crime caused the very crime it predicted, and an innocent man died.

Except that it is nowhere near that clever. For all the AI set-dressing these technologies use, the fact is that the selection criteria inevitably bake in the biases of the people who commission them. Take this story about a computer program that assessed two people who had been involved in a near-identical crime, but predicted that the Black one was more likely to reoffend.
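One way this happens can be sketched in a toy simulation, with made-up numbers: if the training data records arrests rather than offenses, and one group is policed more heavily than another, the model faithfully learns the policing, not the behavior.

```python
import random
random.seed(0)

# Toy illustration with invented numbers: two groups with IDENTICAL true
# reoffense rates, but one group is policed twice as heavily, so their
# reoffenses are recorded (rearrest) twice as often. Any model trained on
# the rearrest data "learns" the policing bias, not the behavior.
TRUE_REOFFENSE_RATE = 0.30
DETECTION = {"group_a": 0.25, "group_b": 0.50}  # chance a reoffense is recorded

def observed_rate(group: str, n: int = 100_000) -> float:
    rearrests = sum(
        1 for _ in range(n)
        if random.random() < TRUE_REOFFENSE_RATE   # actually reoffends...
        and random.random() < DETECTION[group]     # ...and gets caught
    )
    return rearrests / n

for group in DETECTION:
    print(group, f"{observed_rate(group):.1%}")
# group_a ~7.5%, group_b ~15.0%: same behavior, double the "risk score"
```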

Of course Minority Report and Person of Interest are both science fiction, and as a rule people prefer the technology in their science fiction to work. Nobody wants to read a book about a time machine that can travel into the future at a rate of one second per second.

But just like these supposed pre-crime programs, both of these stories come with biases baked in.

Pre-Crime, Not Pre-Justice

The opening of Minority Report sees a man come home early to find his wife sleeping with another man, grab a pair of scissors from the nightstand, and murder them both—or at least he would have if Tom Cruise hadn’t heroically dived in there and stopped him.

The would-be murderer is arrested for crimes that he would have committed and put in a suspended animation pod forever.

Now, even assuming a 100 percent functional and accurate crime prediction system without any pesky minority reports (the single outlying report that is the story’s one concession to the idea it might not be entirely accurate), you are still looking at a heap of legal and human rights issues before this is a workable system.

This was a crime of passion, committed in the moment, that never happened. Why is it a given that police should break in, beat this man up, and put him in a medically induced coma, rather than, say, break in, talk the guy down and offer him a course of counseling?

Person of Interest, with its opening credits and shots of CCTV footage interspersed between scenes, makes itself out to be a commentary on the ubiquitous surveillance panopticon we are living in, but the characters’ main problem with the way the government uses the ultimate surveillance technology is that the government doesn’t use it enough. It’s probably not a coincidence that Person of Interest was created by Jonathan Nolan, brother of Christopher Nolan, who gave us The Dark Knight’s “total surveillance technology is evil and oppressive, but it’s fine for Batman to use it just this once as a treat.”

A debate that should be about police overreach and the right to privacy quickly dissolves into crude, 24-style “Okay, but what if there was a baby strapped to a nuke?” hypotheticals. Whether it is intentional or not, both of these pre-crime stories end up acting almost as propaganda for the kind of all-encompassing surveillance and severe police overreach they supposedly exist to critique.

But this is an issue we see turn up time and again throughout the entire science fiction genre.

Let’s Build the Torment Nexus Again

Let’s leave Minority Report and pre-crime behind for a moment to instead look at the constant push toward AI-controlled drone technology. Like pre-crime, whenever any news in this area turns up, it immediately floods social media with a million people making jokes about The Terminator’s Skynet.

The science fiction narrative is clear here. We give weapons to AI, the AI gains self-awareness, the AI kills its creators.

It is a threat taken seriously at the highest levels. Take this story from last year, when Air Force Col. Tucker “Cinco” Hamilton told a summit about a simulation in which an AI identifying surface-to-air missile threats realized it scored points for destroying those threats, and that its human operator sometimes told it not to. So the AI killed the operator, because the operator was keeping it from accomplishing its objective.

It’s a terrifying story. It is also completely made up. Long after the whole of Twitter had used up all their Arnold Schwarzenegger reaction gifs, the U.S. Air Force released a statement saying that the “simulation” Hamilton talked about was “a hypothetical.”

Now most people, including plenty of science fiction writers, would take a hypothetical like this as an excellent reason not to build artificially intelligent killer robots. But the people coming up with these scenarios are often heavily involved in the sector.

And the clue as to why they keep pushing this narrative comes from the old IBM maxim: “a computer can never be held accountable, therefore a computer must never make a management decision.” Increasingly, the first half of that sentence is exactly why computers are being given those decisions. There are plenty of military scenarios where the idea of decisions being made by something that cannot be held accountable is actually extremely desirable. In April, Israel reportedly used “AI powered” databases to draw up lists of bombing targets in Gaza.

People like Elon Musk evoke science fiction imagery when they talk about how AI is a risk to humanity while simultaneously piling money into creating it. That is less contradictory than it looks: no AI currently in development is going to surpass human intelligence, but it can absolutely be used to devalue human labor and drive down wages.

There is nothing new about science fiction inadvertently stanning for the things it claims to be warning against. Science fiction writers have always loved the Torment Nexus. One of Star Trek’s most iconic villains, Khan Noonien Singh, was a superman created by eugenics. His first appearance, in the TOS episode “Space Seed,” was a warning of the dangers of eugenics, and of how breeding humanity to be stronger and more intelligent would lead to tyranny and oppression. Except in the episode it also, y’know, worked. Everything we know today about eugenics tells us it is cod science backed up by terrible ideology, but over 50 years later Khan still casts a super-powered shadow over the Star Trek universe.

Or look at nearly any science fiction dystopia you care to name: the countless Orwell imitators in the genre. Flawless surveillance, ruthless efficiency, absolute control: these are the things that characterize the tyrannical governments of science fiction. It is a portrait any real-world oppressive regime would be flattered by, whereas in reality they are more likely to resemble Terry Gilliam’s Brazil.

Science fiction is a genre that, for all its camp and silliness, loves to take itself seriously. It is the genre that asks the big questions, that is able to look around the corner and tell us about our rapidly changing world as it really is. And it is right to do so. Our world is becoming more science fictional all the time. Which means storytellers working in the genre need to think very carefully about what they are saying about that world.

In Minority Report, Tom Cruise plays a tool of an oppressive system that uses the promise of security to violate citizens’ constitutional rights, but the message of that story can be lost when that oppressive system looks really, really cool.

