Data is essential for good decision-making, and advanced analytics and automation can improve decisions in every sector. Fears of bias are well warranted, but they should lead to practical and radical steps to prevent bias, not to the end of innovation.
The acceleration of automated decision-making in criminal justice
In late 2018, I wrote an article for WIRED predicting policing’s increased use of ‘augmented’ and automated decision-making. Since then, initial forays have spread across policing and other criminal justice agencies.
Earlier this year, the Metropolitan Police Service followed South Wales Police and others by announcing that it would deploy facial recognition cameras. These use AI-powered software to check images against wanted lists and images of high-risk individuals. Humans are always present (and required, given the current accuracy of these systems) to check whether the cameras have identified a genuine target before any action is taken.
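To make that human-in-the-loop pattern concrete, here is a minimal sketch. The threshold, scores and function name are hypothetical illustrations, not details of any deployed system.

```python
# Minimal sketch of a human-in-the-loop face-matching check.
# MATCH_THRESHOLD, the similarity score and the function name are all
# hypothetical; real deployments use proprietary matching engines.

MATCH_THRESHOLD = 0.80  # hypothetical similarity cut-off for raising an alert

def screen_face(similarity_score: float, operator_confirms: bool) -> str:
    """The algorithm only flags a *potential* match; a human decides before any action."""
    if similarity_score < MATCH_THRESHOLD:
        return "no alert raised"
    if operator_confirms:
        return "confirmed match: officers may act"
    return "discarded as a false positive"

# A strong algorithmic match that the operator rejects leads to no action.
print(screen_face(similarity_score=0.91, operator_confirms=False))
```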
Cellebrite, a company which extracts and processes digital evidence, works across the US and the UK, using AI to scan the deluge of digital information surrounding every crime. Algorithms are used to identify the information most likely to be relevant to an investigation.
Durham Police has long been using an algorithm to identify offenders who are sufficiently low risk that they should receive a diversionary intervention to support behaviour change, rather than a traditional custodial or community punishment.
And in the US, dozens of states are using a range of algorithms to assess a criminal defendant’s likelihood of reoffending, and using this to inform sentencing disposals and other services provided. Northpointe, a commercial provider, offers one of the best-known tools: COMPAS (Correctional Offender Management Profiling for Alternative Sanctions).
The performance of algorithmic decision-making
These projects typically involve using algorithms to inform and advise on decisions rather than to make them. And they are certainly not perfect. For example, when South Wales Police trialled facial recognition software, only 8 percent of the ‘potential matches’ identified by the system were confirmed as genuine by human operators.
But what these systems often do quite effectively is save scarce resources and enable faster decision-making. By drastically reducing the number of images a human must review to identify suspects, for example, the system still saved South Wales Police a huge amount of time. By filtering out gigabytes of irrelevant digital data, Cellebrite can likewise save time for police, prosecutors and defence teams.
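As a rough illustration of why even a low-precision screening tool can save time, the sketch below works through the arithmetic. Only the 8 percent precision figure comes from the reported trial; the number of faces scanned and the alert rate are hypothetical.

```python
# Illustrative arithmetic only. The 8% precision figure is from the reported
# South Wales Police trial; the volume scanned and alert rate are hypothetical.

def triage_workload(total_images: int, alert_rate: float, precision: float):
    """Return how many items humans must review and how many are genuine targets."""
    alerts = total_images * alert_rate     # items flagged by the system for review
    true_matches = alerts * precision      # alerts a human operator would confirm
    return alerts, true_matches

alerts, confirmed = triage_workload(total_images=100_000, alert_rate=0.005, precision=0.08)

print("Images scanned:     100,000")
print(f"Flagged for review: {alerts:,.0f}")     # humans check only these, not all 100,000
print(f"Confirmed matches:  {confirmed:,.0f}")  # roughly 8% of the alerts
```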
The problem of bias
Efficiency is one benefit. But we need to pay just as much attention to whether new systems are fair and legitimate. Here, algorithmic decision-making has come under justified scrutiny because it can entrench, or even amplify, existing bias against specific groups. Anybody involved in data-driven decision-making needs to understand at least three potential sources of bias, whether based on race, sex, class or any other characteristic.
- Pre-existing: If there is existing bias in a system, algorithms may perpetuate it. Let’s assume, for example, that true reoffending rates for Black and White prison leavers are similar, but that Black communities are more heavily policed. This would likely lead to Black people being more likely to be caught if they did reoffend. Because ‘actual’ reoffending is not measured (only the instances when police detect it), an algorithm might conclude that Black people were more likely to reoffend. In turn, this data could be used to justify still more investment in policing Black communities, creating a vicious circle of race-based discrimination. The simple simulation after this list illustrates the effect.
- Technical: The design of a tool to support decision-making can accidentally skew it. For example, if a tool advising on sentencing shows only three possible disposals, listed in alphabetical order, then other options might not receive due attention. Similarly, results might be evaluated out of context: a human observer or image recognition software seeing only a certain view of an incident might misinterpret self-defence as aggression, or the recovery of stolen goods as theft.
- Emergent: A particular challenge for machine learning and deep learning, emergent bias arises when algorithms are used in new or unanticipated contexts. Consider the algorithms that aggressively filter the information we see on platforms like Facebook: by showing us what we are most likely to ‘like’, they end up showing us more homogeneous views, which reinforce our preconceptions or prejudices, and which have probably contributed considerably to growing political polarisation and reduced social cohesion.
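The simulation below, referenced in the first bullet, is a minimal sketch of the pre-existing bias feedback loop. Every number in it is invented: it assumes identical true reoffending rates for both groups and differing detection rates driven purely by policing intensity.

```python
import random

random.seed(0)

# Minimal sketch of pre-existing bias. All numbers are invented for
# illustration: both groups reoffend at the same true rate, but one is
# policed (and therefore detected) more heavily than the other.

TRUE_REOFFENDING_RATE = 0.30                   # identical for both groups by assumption
DETECTION_RATE = {"heavily policed": 0.60,     # hypothetical detection probabilities
                  "lightly policed": 0.30}

def measured_rate(group: str, n: int = 100_000) -> float:
    """Share of people *recorded* as reoffending, which is all a model ever sees."""
    recorded = 0
    for _ in range(n):
        reoffends = random.random() < TRUE_REOFFENDING_RATE
        detected = reoffends and random.random() < DETECTION_RATE[group]
        recorded += detected
    return recorded / n

for group in DETECTION_RATE:
    print(f"{group}: measured reoffending rate = {measured_rate(group):.1%}")

# Both groups truly reoffend at 30%, but the recorded rates differ sharply,
# so a model trained on recorded outcomes 'learns' that one group is riskier.
```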
In our next article in this series, we will look at some of the ways of overcoming these biases in algorithmic decision-making.