
In the 2002 film Minority Report, based on Philip K. Dick’s 1956 novella The Minority Report, the federal government plans to take a “precrime” police program national, using clairvoyant humans to arrest would-be murderers before they can commit their crimes.
Now, according to Akhil Bhardwaj, an Associate Professor of Strategy and Organization at the University of Bath, “The U.K. government has decided to chase this chimera by investing in a program that seeks to preemptively identify who might commit murder.”
Such plans haven’t been confined to the movies, either. As Bhardwaj points out, Joseph Stalin implemented something similar (minus the psychics) in real life.
“…under the communist regime, citizens were removed from society before they could cause harm to it,” he writes. “This removal, which often entailed a trip to the labor camp from which many did not return, took place in a manner that deprived the accused of due process. In many cases, the mere suspicion or even hint that an act against the regime might occur was enough to earn a one-way ticket with little to no recourse. The underlying premise here [was] that the officials knew when someone might commit a transgression. In other words, law enforcement knew where that line lies in people’s hearts.”
Of course, the U.K. government thinks it can do this better with its Homicide Prediction Project using, you guessed it, AI.
In this program, AI uses government and police data to profile people and “predict” who has a high likelihood of committing murder. There’s just one problem: how do you know whether the algorithm is correct? After all, if the predicted crime never happens, there is no way to tell whether the algorithm prevented it or whether it never would have occurred in the first place.
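To see why that question has no good answer, consider a toy simulation, entirely hypothetical, with invented numbers and no connection to the actual Home Office system: once authorities act on everyone a model flags, the flagged group never produces an observable outcome, so the model’s false-positive rate on precisely the people it targets can never be measured.

```python
import random

random.seed(0)

# Hidden ground truth and a hypothetical risk score for each person.
# Both are randomly invented here purely for illustration.
population = [
    {"would_offend": random.random() < 0.01,
     "risk_score": random.random()}
    for _ in range(100_000)
]

THRESHOLD = 0.99
flagged = [p for p in population if p["risk_score"] >= THRESHOLD]

# Preemptive intervention: flagged people are removed before anything
# happens, so their outcome is never recorded. Labels only ever come
# from the unflagged group.
observed = [p["would_offend"] for p in population
            if p["risk_score"] < THRESHOLD]

print(f"flagged and intervened on: {len(flagged)}")
print(f"offences observed among the unflagged: {sum(observed)}")
# Missing from the data this policy generates: how many of the flagged
# people were false positives. That counterfactual is unobservable once
# the intervention happens.
```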
“An algorithm’s estimates can be flawed, and the algorithm does not update itself because no one is held accountable,” Bhardwaj explains. “No algorithm can be expected to be accurate all the time; it can be calibrated with new data. But this is an idealistic view that does not even hold true in science; scientists can resist updating a theory or schema, especially when they are heavily invested in it. And similarly and unsurprisingly, bureaucracies do not readily update their beliefs.”
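Bhardwaj’s point about calibration can be made concrete with a minimal sketch (again hypothetical, with invented figures): calibration simply means checking whether a model’s predicted rates match observed outcomes and adjusting when they don’t, which is exactly the feedback loop an unaccountable system lacks.

```python
def calibration_check(predicted_probs, outcomes):
    """Compare the mean predicted probability with the observed event rate."""
    predicted_rate = sum(predicted_probs) / len(predicted_probs)
    observed_rate = sum(outcomes) / len(outcomes)
    return predicted_rate, observed_rate

# Invented numbers: the model claims 5% average risk; reality delivers 1%.
preds = [0.05] * 1000
outcomes = [1] * 10 + [0] * 990

predicted, observed = calibration_check(preds, outcomes)
print(f"model says {predicted:.1%}, outcomes say {observed:.1%}")

# A system that updates would shrink its scores toward the observed rate...
recalibrated = [p * (observed / predicted) for p in preds]
print(f"recalibrated mean: {sum(recalibrated) / len(recalibrated):.1%}")
# ...a bureaucracy that is never audited simply keeps the old numbers.
```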
The U.K. government hasn’t said what it plans to do with its program, other than that it is conducting research for the purposes of “preventing and detecting unlawful acts.” That research, by the way, draws on data from “hundreds of thousands of people who never granted permission for their data to be used to train the system,” according to Bhardwaj.
He concludes that “it is rather strange that a democracy like the U.K. is revisiting a horrific and failed project from an authoritarian Communist country as a way of ‘protecting the public.'” Strange indeed.