When an inmate in Spain requests certain measures, such as third-degree (open-regime) status or conditional release, the judge who must authorize or deny the request receives a report on the inmate. This document, prepared by an interdisciplinary team at the prison, provides elements on which to base the decision. In Catalonia, part of that report is produced by an algorithm. It is called RisCanvi and, for 12 years, it has been calculating prisoners’ risk of recidivism. Despite being active for so long, few people know that an evaluation this sensitive falls to a machine. Whether an automated system should be used for these tasks divides the experts. Its effectiveness, in light of the available data, is not bad.
The tool is applied to everyone who enters prison. The prisoners of the Catalan independence trial (the procés) have been no exception: no one escapes physical, psychological or algorithmic assessment, although these reports are confidential. RisCanvi is updated every six months with data that officials feed into the system. At first, a short version is applied (RisCanvi Screening), consisting of ten items, such as the inmate’s history of violence, the age at which he committed his first offense, whether he has problems with drugs or alcohol, or whether he has resources and family support. The result of the algorithm can be low risk or high risk. If it is high, the extended version is used, made up of 43 items, whose preparation also requires an official to conduct a scheduled interview with the inmate.
The full version of RisCanvi takes into account factors such as the distance between the inmate’s home and the prison, the criminal history of his environment, educational level, socialization difficulties, IQ, personality disorders, whether he is the family’s main source of income, and his level of impulsivity and emotional instability. Each of these elements is weighted automatically, and its weight varies depending on sex, age and nationality. The final verdict of the algorithm can be low, medium or high risk.
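The two-stage flow described above can be sketched in code. Everything here is hypothetical: the item names, weights and cutoffs are invented for illustration, since the real instrument’s calibration (which varies by sex, age and nationality) is not public.

```python
# Hypothetical sketch of RisCanvi's two-stage evaluation flow.
# Item names, weights and cutoffs are invented for illustration only.

SCREENING_CUTOFF = 5          # assumed threshold for the 10-item short version
FULL_CUTOFFS = (10, 20)       # assumed low/medium and medium/high boundaries

def screening_risk(items: dict[str, int]) -> str:
    """Short version (RisCanvi Screening): ten items, binary low/high result."""
    score = sum(items.values())
    return "high" if score >= SCREENING_CUTOFF else "low"

def full_risk(items: dict[str, int], weights: dict[str, float]) -> str:
    """Extended version: 43 weighted items, three risk bands."""
    score = sum(weights[name] * value for name, value in items.items())
    if score < FULL_CUTOFFS[0]:
        return "low"
    if score < FULL_CUTOFFS[1]:
        return "medium"
    return "high"

# Only inmates flagged "high" by the screening move on to the full version,
# which also requires a scheduled interview with the inmate.
```

The key design point the article describes is the cascade: a cheap ten-item filter first, and the expensive 43-item instrument (interview included) only for those it flags.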
Some 800 psychologists, jurists, criminologists, educators and social workers regularly operate this program. The algorithm is fed by the evaluations of inmates that officials enter and by the prison service’s own databases (days of sentence served, sanctions, etc.).
The defenders of RisCanvi (a contraction of risc and canvi, risk and change in Catalan) argue that this tool has brought clarity and rigor to a very sensitive procedure. Critical voices demand that it be subjected to annual algorithmic audits and that the weight this system’s verdict should carry in the judge’s final decision be clarified.
Catalonia has specific powers in the penitentiary field. It is governed by the same laws as the rest of Spain, but it manages its prisons itself. Hence it was able to implement this system, which does not exist in other Spanish regions but is similar to those in use in countries such as Canada, the United States and the United Kingdom.
Its promoters drew on those experiences. “We wanted to be more accurate in the prognosis, especially with low-risk inmates who were sometimes in jail without ever leaving,” recalls Jordi Camps, head of rehabilitation services at the General Directorate of Penitentiary Affairs. “We understood that a more scientific method could help us unify prediction criteria and improve those predictions.”
Camps’s team got in touch in 2007 with the Group for Advanced Studies in Violence at the University of Barcelona (UB), whose researchers are pioneers in the prediction of violent behavior. “For two years we studied the inmates’ risk factors and selected those with predictive capacity. They were organized in the form of a questionnaire, and the algorithms were developed to make the predictions,” explains Antonio Andrés Pueyo, professor of psychology and director of the research group that developed RisCanvi.
From the analysis of a database of 800 cases, the 43 items that make up the tool today were obtained. No artificial intelligence is involved in its operation. “The system was built with basic multiple regression techniques; it is not very complex,” Pueyo stresses.
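To make the kind of technique Pueyo describes concrete, here is a minimal sketch of fitting a regression-based risk score from item data. It is not the real model: the data is synthetic, and a logistic regression (a standard choice for a yes/no recidivism outcome) is fitted by plain gradient descent in pure Python.

```python
# Minimal illustration (NOT the real RisCanvi model) of fitting a
# regression-based risk score to binary outcome data.
import math
import random

def fit_logistic(X, y, lr=0.1, epochs=500):
    """Fit per-item weights and a bias by gradient descent on log-loss."""
    n_features = len(X[0])
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))   # predicted risk probability
            err = p - yi
            for j in range(n_features):
                w[j] -= lr * err * xi[j]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    """Risk probability for one case given fitted weights."""
    z = b + sum(wj * xj for wj, xj in zip(w, xi))
    return 1.0 / (1.0 + math.exp(-z))

# Synthetic data: two invented items; the first drives the outcome.
random.seed(0)
X = [[random.random(), random.random()] for _ in range(200)]
y = [1 if xi[0] > 0.5 else 0 for xi in X]
w, b = fit_logistic(X, y)
```

The fitted weights play the role of the per-item weights the article mentions: each item’s contribution to the final score is learned from historical cases rather than set by hand.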
The algorithm was launched in 2009 and has since been readjusted and recalibrated. “They drew on the UK’s OASys system and others, such as the Dutch or Canadian ones, but they designed an instrument adapted to our particular risk factors,” points out Marian Martínez, psychologist at the Specialized Intervention Programs Unit for Prison Affairs.
Does it work well?
Going into detail, RisCanvi predicts five things: general recidivism, violent recidivism, intra-prison violence, self-harm in prison, and violation of permits. “After analyzing the data for a couple of years, I think that RisCanvi is not drastically better or worse than similar systems in other countries,” concludes Carlos Castillo, professor in the Department of Information and Communication Technologies at Pompeu Fabra University.
Castillo has had access to the RisCanvi algorithm. His is one of the few independent studies that have been conducted on this tool. “The AUC level [an indicator used in analyses of algorithmic performance] is 0.7, where 0.5 is random and 1.0 is a perfect system. The interesting thing is that if you ask a human to carry out this task with the information from the 43 items, you get performance of roughly the same order,” he illustrates. That speaks highly of the elements the tool analyzes. Their identification and description, rather than the mechanical aspect, is for Castillo the greatest strength of RisCanvi.
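The AUC figure Castillo cites has a concrete interpretation: it is the probability that a randomly chosen positive case (a reoffender) receives a higher risk score than a randomly chosen negative one. A short sketch, with invented toy scores, shows how it is computed:

```python
# Pairwise AUC: the fraction of (positive, negative) pairs in which the
# positive case is ranked above the negative one (ties count as half).
# Scores and labels below are invented toy data.

def auc(scores, labels):
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos for n in neg
    )
    return wins / (len(pos) * len(neg))

# A perfect ranker scores 1.0; identical scores for everyone give 0.5,
# which is why 0.5 is described as "random" in the article.
perfect = auc([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0])
```

On this scale, RisCanvi’s reported 0.7 means that about 70% of the time it ranks an eventual reoffender above a non-reoffender.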
The only public study evaluating the performance of the algorithm was conducted by the Centre d’Estudis Jurídics i Formació Especialitzada (CEJFE) in 2014 and measures prisoners’ recidivism during the first four years after leaving prison. The findings are clear: 77% of the inmates who were identified as medium or high risk relapsed. And 57% of those who did not reoffend had been labeled low risk.
RisCanvi’s assessment of an inmate may be modified by the professional preparing the report if he or she considers it inadequate. According to Camps, this happens in only 1% of cases. On the other hand, “a high risk of recidivism detected by RisCanvi does not necessarily mean that the inmate will not be released, but rather that additional measures must be taken, such as restricting the permit or fitting him with an electronic tag,” Martínez illustrates. “If it is still in use 12 years later, it means that it works internally and that no crisis has cast suspicion on the system,” concludes Pueyo.
“RisCanvi is good at detecting low-risk cases, which was one of the goals,” says Camps. “It unifies the criteria and brings order to which risk factors can predict certain behaviors.” Camps recalls that those who work with inmates may have their own biases. Rapists, for example, tend to behave very well in prison, and that could distort an official’s assessment. There may also be someone who penalizes Roma inmates or drug addicts. All these biases would be mitigated by the algorithm.
Lack of transparency
“So far we have not found evidence that RisCanvi involves serious discrimination,” says Castillo. Still, the researcher believes the algorithm should undergo much greater scrutiny. The draft European Artificial Intelligence Regulation, in fact, classifies a system like RisCanvi as high-risk. “That a person is in jail is the result of layers and layers of marginalization and exclusion. Criminal justice data is some of the most skewed there is. That is why we must be especially careful with it and with the algorithms that are applied to it,” argues Castillo.
The consulting firm Eticas Consulting will carry out an external audit of the algorithm in the third quarter of this year. “We will apply reverse engineering to analyze the system: we will talk with those affected to see whether any group has been discriminated against,” says its general director, Gemma Galdón. She is concerned that the algorithm’s verdict may or may not be taken into account by the official. “That is a very serious problem. If you implement an algorithm, there must be very clear instructions about it. If you leave it to the user’s discretion, there will always be someone who trusts the software’s decision and simply validates it.”
“It is important to see how an algorithm works, but it is more important to know how it is being used,” agrees Lorena Jaume-Palasí, an expert in algorithmic ethics and an advisor to the government and the European Parliament. “There are people who hide behind the algorithm when it suits them and are also capable of rejecting its suggestions when they do not square with their prejudices. We have already seen in the United States that racist judges agreed with the COMPAS system only when it was severe with Black defendants.”
“RisCanvi has a lot of room for improvement,” acknowledges Camps, though his assessment points in a different direction: “I think that before long we will move toward machine learning or some form of analysis more sophisticated than what we have now.”