The incredible and bizarre chain of events that led law enforcement to this point could easily serve as the back story to the film “Minority Report”: technology has finally been developed that analyzes a criminal’s past history and attempts to predict how likely that person is to commit crimes in the future. The powerful program not only estimates the likelihood of recidivism, but actually makes recommendations on whether a parolee should be let off easy or locked up for good.
The disturbing program uses a complex computer algorithm, built on millions of criminal histories gathered over several years, to make predictions about each criminal’s behavior. For example, if a person was arrested for armed robbery at the age of 14 and was later picked up for a similar crime, the computer suggests that such an individual is likely to commit further crimes of a similar nature, and may eventually even be moved to murder. If, however, that person had committed their first crime at age 30, the computer would suggest the likelihood of murder is far lower.
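The article does not disclose how the real algorithm works, so the following is only a hypothetical sketch of the kind of scoring it describes: a toy risk score built from two of the factors mentioned above (age at first offense and number of prior similar offenses), with invented weights chosen purely for illustration.

```python
def recidivism_risk(age_at_first_offense: int, prior_offenses: int) -> float:
    """Return a toy risk score in [0, 1]; higher means higher predicted risk.

    The weights (0.6 / 0.4) and cutoffs are invented for illustration only
    and bear no relation to any real system's model.
    """
    # A younger age at first offense pushes the score up; zero at age 30+.
    youth_factor = max(0, 30 - age_at_first_offense) / 30
    # More prior similar offenses push the score up; capped at five priors.
    prior_factor = min(prior_offenses, 5) / 5
    return round(0.6 * youth_factor + 0.4 * prior_factor, 3)

# Mirrors the article's example: a first arrest at 14 followed by a repeat
# offense scores far higher than a single first offense at age 30.
young_offender = recidivism_risk(14, 2)
older_offender = recidivism_risk(30, 1)
```

Even this trivial sketch makes the ethical stakes concrete: the score is driven entirely by past events, yet it is presented as a statement about the future.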
Already the predictive power of the program is raising important ethical questions for law enforcement officials and the public alike. If certain individuals pose a threat, is it ethical to use predictive technology to decide how great that threat is, and possibly to incarcerate them before they commit a crime? What responsibility does the judicial system have to people who have been labeled a threat but have not actually committed any crime? Is a crime predicted by the computer and then ignored by the operator a “pre-crime”? And if so, should “pre-crimes” be made illegal, or treated any differently than an officer’s hunch would be? Furthermore, does an individual have the right to be informed of what the computer suggests they may do in the future? And what of those labeled “future criminals”?
It seems a disturbing move toward a dystopian society (and an unusually transparent one) if individuals are judged for criminal behavior based on what they may do in the future rather than on the crimes they have already committed. Any use of this technology must also take into consideration a more advanced and developed form of the same software and algorithms. What if this relatively simple system were refined to the point that it could predict the likelihood of criminal behavior even at birth? Should a perfectly innocent child flagged by genetic analysis as a high risk for future criminal behavior be monitored any differently than the same child with a lower risk?
While the system is still in its infancy, as with any new technology its future implications must be considered long in advance in order to prepare for the ethical and judicial puzzles that may confront us. Just as this stream of data flowing through a computer would judge us before we ever become criminals, we must judge, in advance, the potential human rights violations this system may one day perpetrate.