
Leapwise

Helping public service leaders with their toughest choices.



Four Ways of Overcoming Bias in Algorithmic Decision-Making

November 26, 2020 By Tom Gash

In a previous article, we looked at three different types of bias that can emerge when using algorithms to support (or make) decisions. Here, we put forward some remedies: four approaches that can ensure that making good use of data does not perpetuate or create bias.

1. Involve people who understand bias and the risks of algorithmic bias in the design and development of data-driven decision-making projects. Simply building more diverse teams helps, as such teams tend to be more alert to the risks of bias. But it is also worth checking that, alongside technical specialists, your team includes people with training in or an appreciation of the social sciences, or even data ethics specialists.

2. Commission external reviews. Several criminal justice sector organisations have reduced bias by opening up their new tools to external scrutiny. An independent reviewer of Durham Police’s Checkpoint programme, for example, identified a risk of bias from including postcode as a predictor of future offending behaviour – which could unfairly disadvantage people from particular areas. The National Data Analytics Solution (NDAS) project run by West Midlands Police has put considerable work into ensuring transparency and ethical decision-making, inviting a review from the Alan Turing Institute’s Data Ethics Group. The West Midlands PCC has also put in place an independent data ethics panel.

3. Measure algorithmic performance and bias systematically.    

While the precise details of how many decision-support algorithms are coded are commercially sensitive, all justice organisations will have to be able to explain, in layman’s terms, the tools they are experimenting with, their accuracy and their cost-effectiveness. The US Department of Commerce is leading the field in this area. Its Face Recognition Vendor Test reports publicly on the accuracy and racial bias of AI systems submitted by a vast number of suppliers. We believe the UK should implement similar systems as a matter of urgency. There is arguably even a case for public sector inspectorates (or a new cross-sector inspectorate) to be given a role in scrutinising algorithms in detail and reporting publicly on any observed bias.
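
What might such systematic measurement look like in practice? Below is a minimal sketch, using entirely invented records rather than output from any real system, of the kind of per-group error reporting a vendor test or inspectorate might publish: it compares false match rates across demographic groups, since a persistent gap between groups is one simple, publishable indicator of bias.

```python
# A minimal sketch of per-group bias measurement. The records below are
# invented placeholders, not data from any real facial recognition system.

from collections import defaultdict

# Each record: (demographic_group, system_said_match, was_actually_a_match)
records = [
    ("group_a", True, True), ("group_a", True, False), ("group_a", False, False),
    ("group_a", False, False), ("group_b", True, False), ("group_b", True, False),
    ("group_b", True, True), ("group_b", False, False),
]

false_matches = defaultdict(int)   # system said match, but it wasn't
non_matches = defaultdict(int)     # all cases where there was no true match

for group, predicted, actual in records:
    if not actual:
        non_matches[group] += 1
        if predicted:
            false_matches[group] += 1

for group in sorted(non_matches):
    fmr = false_matches[group] / non_matches[group]
    print(f"{group}: false match rate = {fmr:.1%}")
# A persistent gap between groups' false match rates is a red flag for bias.
```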

4. Create a more robust legislative framework.

Ensuring inspectorate access to commercially produced algorithms would likely require legislation. And further legislative change may be required to ensure proper use of data across the public and private sectors. The US is making some progress here, but there is much more to do: not least because global corporations will require (and benefit from) some standardisation across geographies. With the UK exiting the EU, which has led on much of the work to ensure data protection and other data-processing legislation, this is another area where the UK government might need to develop additional expertise post-Brexit.


Filed Under: Views

3 Types of Bias in Automated Decision-Making

November 12, 2020 By Tom Gash

Data is essential for good decision-making, and advanced analytics and automation can be sources of better decision-making in every sector. Fears of bias are well warranted, but these should lead to practical and radical steps to prevent bias, not the end of innovation.

The acceleration of automated decision-making in criminal justice

In late 2018, I wrote an article for WIRED predicting policing’s increased use of ‘augmented’ and automated decision-making. Since then, initial forays have spread across policing and other criminal justice agencies. 

Earlier this year, the Metropolitan Police Service followed South Wales Police and others by announcing that it would deploy facial recognition cameras. These use AI-powered software to check images against wanted lists and images of high-risk individuals. Humans are always present (and required, given the current accuracy of these systems) to check whether the cameras have identified a genuine target before any action is taken.

Cellebrite, a company which extracts and processes digital information, works across the US and the UK, using AI to scan the deluge of digital information surrounding every crime. Algorithms are used to identify the information most likely to be relevant to an investigation.

Durham Police has long been using an algorithm to identify offenders who are sufficiently low risk that they should receive a diversionary intervention to support behaviour change, rather than a traditional custodial or community punishment.

And in the US, dozens of states are using a range of algorithms to assess a criminal defendant’s likelihood of reoffending, and using this to inform sentencing disposals and other services provided. Northpointe, a commercial provider, offers one of the best known tools: COMPAS (which stands for Correctional Offender Management Profiling for Alternative Sanctions).

The performance of algorithmic decision-making

These projects typically involve using algorithms to inform and advise on decisions rather than to make them. And they are certainly not perfect. For example, when South Wales Police trialled facial recognition software, only 8 percent of ‘potential matches’ identified by the system were validated by human operatives.

But what these systems often do quite effectively is save scarce resources and enable faster decision-making. By reducing the number of images requiring human review to identify suspects, for example, South Wales Police still saved huge amounts of time. By filtering out gigabytes of irrelevant digital data, Cellebrite can likewise save time for police, prosecutors and defence teams.
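
As a rough illustration of the arithmetic involved – with wholly invented volumes, none of which come from the South Wales trial – even a low-precision system can slash the human workload if it screens out most candidate images:

```python
# Illustrative calculation with invented numbers: even a system with low
# precision can cut human review workload dramatically if it screens out
# most candidates. None of these figures come from the South Wales trial.

faces_scanned = 100_000    # hypothetical faces captured during a deployment
flagged_by_system = 500    # candidates the software raises for human review
precision = 0.08           # share of flagged candidates that are genuine (cf. the 8% above)

true_matches = int(flagged_by_system * precision)
review_reduction = 1 - flagged_by_system / faces_scanned

print(f"Humans review {flagged_by_system:,} images instead of {faces_scanned:,} "
      f"({review_reduction:.1%} less work), to find ~{true_matches} genuine matches.")
```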

The problem of bias

Efficiency is one benefit. But we need to pay as much attention to whether new systems are fair and legitimate. Here, algorithmic decision-making has come under justified scrutiny because it can entrench – or even amplify – bias against specific groups. Anybody involved in decision-making based on data needs to understand at least three potential sources of race, sex, class or any other type of bias.

  1. Pre-existing: If there is existing bias in a system, algorithms may perpetuate it. Let’s assume, for example, that true reoffending rates for Black and White prison leavers are similar, but Black communities are more heavily policed. This would likely lead to Black people being more likely to be caught if they did reoffend. As ‘actual’ reoffending is not measured (only the instances when police detect it), an algorithm might conclude that Black people were more likely to reoffend. In turn, this data could be used to justify still more investment in policing Black communities, creating a vicious circle of race-based discrimination (a simple simulation of this loop follows this list).
  2. Technical: The design of a decision-support tool can accidentally skew decision-making. For example, if a tool advising on sentencing disposals shows only three possible disposals, listed in alphabetical order, other disposals might not receive due attention. Similarly, results might be evaluated out of context. For example, without context, either a human observer or image recognition software seeing only a certain view of an incident might misinterpret self-defence as aggression, or the recovery of stolen goods as theft.
  3. Emergent: A particular challenge for machine learning and deep learning, emergent bias results from the use of algorithms in new or unanticipated contexts. A familiar example is the way algorithms aggressively filter the information we see on platforms like Facebook. By showing us what we are most likely to ‘like’, they serve us increasingly homogeneous views, which reinforce our preconceptions or prejudices – and have probably contributed considerably to growing political polarisation and reduced social cohesion.
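
To make the pre-existing bias mechanism in point 1 concrete, here is a minimal sketch with entirely invented numbers: two groups with identical true reoffending rates but different levels of policing, plus a feedback rule that directs more policing wherever measured reoffending looks higher. The measured rates diverge even though the underlying behaviour is the same.

```python
# A minimal simulation of the feedback loop described in point 1. All numbers
# are invented for illustration; both groups have the SAME true reoffending rate.

TRUE_REOFFENDING_RATE = 0.30
detection_rate = {"group_a": 0.50, "group_b": 0.25}  # group_a is more heavily policed

for year in range(1, 6):
    # What recorded data shows: only *detected* reoffending is measured
    measured = {
        group: TRUE_REOFFENDING_RATE * rate
        for group, rate in detection_rate.items()
    }
    print(f"Year {year}: measured reoffending = "
          + ", ".join(f"{g}: {m:.1%}" for g, m in measured.items()))

    # Feedback: the group with higher *measured* reoffending attracts more
    # policing, which raises its detection rate further (capped at 90%).
    worse = max(measured, key=measured.get)
    detection_rate[worse] = min(0.90, detection_rate[worse] * 1.2)
```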

In our next article in this series, we will look at some of the ways of overcoming these biases in algorithmic decision-making.


Filed Under: Views

Actively Manage your Decision-Making Environments: Decision Science for Police Leaders

October 27, 2020 By Tom Gash

Key Lesson #7 from Decision Science: a New Resource for Police Leaders. This lesson is an excerpt; you can download the full guide here.

Police performance depends on millions of decisions – at the frontline and in the board room. Can the sector strengthen its decision-making muscles by harnessing insights from decision sciences?

Lesson #7: Actively Manage Your Decision-Making Environments

The lessons we have shared in this series are partial glimpses into the vast field of the decision sciences. And the reality is that, to improve decision-making in policing, it is not enough simply to understand a few key decision-making concepts and nuggets of evidence. Rather, decision-making needs to be viewed as a critical organisational capability, and we need to develop our decision-making approaches at every level of policing organisations.

A more rigorous approach to any individual high-value decision or strategy can, of course, be helpful. For multi-million-pound decisions, a properly structured decision-making process supported by proper analytical skills and subject-matter knowledge can save a fortune and much more effectively support improvement for the public.

But while a one-off process can show what a robust decision-making exercise looks like, this will not create the organisational infrastructure that police services need to become truly decisive and effective. For that, we need to build effective decision-making habits bit by bit. We need:

  • Good governance that places decisions with the people best placed to choose well
  • A rich array of data and information, drawn from both internal sources and open sources and harnessing both quantitative and qualitative insight
  • Feedback and measurement systems that truly inform decision-makers on the consequences of their decisions
  • New processes for engaging communities in decision-making and building police legitimacy
  • The ability to automate routine decisions, freeing up officers’ time for where their expertise is most needed
  • New ways of appraising business cases, managing projects and assessing performance
  • Enhanced meeting management disciplines and group problem solving models

Decision-making errors are, after all, made by organisations, not just people. Former South Yorkshire Police Chief Superintendent David Duckenfield, match commander on the tragic day of the Hillsborough disaster that resulted in 96 deaths, admitted he had “limited knowledge” of the role and that he probably “wasn’t the best man for the job on the day”. Mr Duckenfield was criticised for his poor decision-making but acquitted of gross negligence manslaughter last November after evidence at his trial showed that poor decisions dating back years by many people had contributed to the tragedy.

One of the key insights from studies of human decision-making over the years is, in fact, that individual decisions are powerfully shaped by the circumstances that decisions are made in. So we need to think about the decision-making context and slowly reshape our policing organisations to support better decisions – just as we need to reshape the public sphere to discourage criminal decisions.

This won’t happen overnight. But the steps required for better decision-making – improved governance, information management, skills and so forth – are already being taken by many policing organisations. And there are an increasing number of simple methods and technology tools that can help.

Amid intense scrutiny, now is the ideal time to accelerate progress. A sharper focus on decision-making can provide confidence to police leaders and the public that critical policy and operational decisions are robust. And appreciating that decision-making is a critical capability for modern policing organisations creates the opportunity to dramatically improve results for the public.

This is a version of an article produced for Police Professional.

You can download the full guide complete with all 7 key lessons for police leaders below.


Filed Under: Views

Know Your Limits: Decision Science for Police Leaders

October 16, 2020 By Tom Gash

Lesson #5 from Decision Science: a New Resource for Police Leaders. This lesson is an excerpt and you can download the full guide here.

Police performance depends on millions of decisions – at the frontline and in the board room. Can the sector strengthen its decision-making muscles by harnessing insights from decision sciences?

Key Lesson #5: Know Your Limits

There is now a huge literature on ‘biases and heuristics’: the mental shortcuts and cognitive processes that most commonly lead decision-makers astray (while also saving valuable headspace for other work). Three of the most often replicated findings across domains are:

  1. We tend to go with the flow (default). In 2012, the British government was concerned that less than half of workers (47 per cent) had enrolled in pension schemes. It then made a simple policy change that has transformed pension saving in the UK. Until 2012, employees had simply been given the option to enrol in a pension scheme. But having seen studies showing that we tend to prefer going with the flow rather than making active decisions, the government decided that all workers would be automatically enrolled in a pension scheme unless they actively opted out. In 2019, more than three quarters (77 per cent) of British employees were enrolled in a pension scheme. Even though people were given the same choice, the framing of the decision led to radically different results.
  2. We are pretty bad at understanding probabilities (and risks), and we are strongly influenced by the ways that statistical probabilities are presented to us. In general, we understand what odds mean reasonably well when they are presented as frequencies (eg, one in 10 people) but not as percentages. A US study of juror decision-making worryingly found that a standard way of presenting DNA match information (using percentages) resulted in around 50 per cent more guilty verdicts (something for both prosecutors and defence lawyers to think about). Or, to put it as frequencies: using percentages, five in ten verdicts were guilty; using a frequency description, just over three in ten were.
  3. We tend to be overconfident of our own abilities. Have you ever wondered why your projects keep coming in late and over budget? You aren’t alone! In a study of more than 100 major government infrastructure projects, nine out of ten projects underestimated costs: the average cost overrun for rail projects was 45 per cent, for tunnels and bridges 34 per cent, and for roads 20 per cent. They also dramatically overestimated the number of people who would use the new infrastructure. This was for two reasons. First, the so-called superiority illusion: when assessing their driving skills, the vast majority of people in US, Swedish and UK studies say that they are ‘above average’ – 9 out of 10 Americans, for example. Second, because people lie! ‘Lying planners’ appear to want their projects funded and built so much that they base their estimates not on robust analysis but on whatever seems plausible enough to win approval. This can be a particularly acute problem when certain incentives are in place – despite codes of ethics calling for more ethical conduct.

Simply understanding that these decision-making weaknesses exist can be the first step to overcoming them. But there are also specific techniques that at least partially overcome each of them. For example, when Leapwise works with partners, we help them overcome over-optimism when planning programmes and projects by encouraging ‘reference class forecasting’. Rather than fooling ourselves with over-optimistic, bottom-up estimates of project costs and benefits, we ask: what did other projects like this deliver, how long did they take, and how much did they end up costing?
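
As a minimal sketch of how reference class forecasting can work in practice – using invented overrun figures, not data from any real project database – the idea is to budget from the distribution of past outcomes rather than from our own bottom-up estimate:

```python
# A minimal sketch of reference class forecasting. The historical overrun
# ratios below are hypothetical placeholders for illustration only.

import statistics

# Ratio of actual cost to estimated cost for past, similar projects
reference_class_overruns = [1.10, 1.45, 0.95, 1.30, 1.80, 1.25, 1.05, 1.60, 1.40, 1.15]

def reference_class_forecast(bottom_up_estimate: float, percentile: float = 0.8) -> float:
    """Adjust a bottom-up estimate using the distribution of past overruns.

    Instead of trusting our own (probably optimistic) estimate, we ask what
    projects like this actually cost, and budget at a chosen confidence level.
    """
    ordered = sorted(reference_class_overruns)
    index = min(int(percentile * len(ordered)), len(ordered) - 1)
    uplift = ordered[index]
    return bottom_up_estimate * uplift

estimate = 10_000_000  # our bottom-up estimate: £10m
print(f"Median-based budget: £{estimate * statistics.median(reference_class_overruns):,.0f}")
print(f"P80 budget:          £{reference_class_forecast(estimate, 0.8):,.0f}")
```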

This is a version of an article produced for Police Professional.

You can download the full guide complete with all 7 key lessons for police leaders below.


Filed Under: Views

Leadership Decision-Making: What Can We Learn From a Kansas City Police Chief?

October 7, 2020 By Tom Gash

Ex-FBI supremo Clarence Kelley knew that one of the secrets of good decision-making is to admit what you don’t know. What can we learn from him? Access our full decision science guide for police leaders here.

In 1961, a man named Clarence Kelley retired from the FBI and took up the job of leading the Kansas City Police Department. He joined the department at a tricky time. Crime had started to tick upwards across the United States, and as the decade progressed, Kansas City and other police forces had to respond accordingly: with more police and more patrols, particularly vehicle patrols.

In 1972, however, once he had firmly established his reputation and relationships in the city of his birth, Kelley went out on a limb and made a confession. As he put it, “Many of us in the department had the feeling we were training, equipping, and deploying men to do a job neither we, nor anyone else, knew very much about”. Working with the independent research body the Police Foundation, Kelley decided to do something about this. He embarked on one of the boldest experiments in criminal justice history. Over the course of a year, researchers tracked what happened when he entirely eliminated vehicle patrols from five districts, doubled them in five other districts, and left them unchanged in the remaining five.

Researchers were tasked with closely tracking what was happening to crime – and they introduced surveys of residents to augment the data recorded by the police. This data was intended to allow Kelley to reintroduce patrols if neglected areas collapsed into chaos. But, for two reasons, he never had to. First, Kelley was poached to become the new FBI Director after the Watergate scandal engulfed J. Edgar Hoover’s successor. He was replaced at Kansas City by Joseph McNamara, who agreed to carry on the experiment. Second, crime did not soar. In fact, the results were startling in their lack of drama. In the areas with double patrols, the areas with no patrols and those with stable patrols, crime trends were broadly the same. Vehicle patrols were not making the difference to crime rates most had hoped.

The easy conclusion to draw was that the police weren’t making a difference to crime. But in fact they do – as the wider experiments I share in my book Criminal reveal. What the experiment instead showed was that this particular mode of police activity – random vehicle patrols – was grossly ineffective. The results of the experiment therefore helped to encourage police departments in the US and internationally to gradually start following their own mantra: ‘Sir, madam. Get out of the vehicle’. And though it took several decades, the trial encouraged police to become more embedded in communities and more focused on targeted patrolling of specific crime hotspots.

The style of leadership exhibited by Kelley and McNamara is one that is open about the limits of our knowledge about ‘what works’; willing to experiment and take risks; eager to collaborate with those outside policing; and not in thrall to conventional wisdom. And their approach, championed by George Kelling of the Police Foundation and by several other police leaders and academics, has slowly but surely led to the development of the evidence-based policing movement, and the continual evolution of police practice across the world.

Creating experiments in policing – to test what reduces crime, what improves victim and witness satisfaction, and even what works in management approaches – is challenging. But these issues are relatively easily overcome with the right support and expertise. Indeed, it is now possible to harness real-time data to create an experimental ecosystem that is continually testing what works and informing decisions. The truth is that the hardest part of what Kelley did was to admit ignorance and to question whether what he was already doing was working.

Please get in touch if you are looking to build a culture of learning and evidence-based decision-making in your organisation, or if you are trying to improve your evaluation and performance measurement capabilities. We would also love to hear and share stories of your success.

This is a version of an article produced for Police Oracle.

The full story on where and how police activities affect crime rates is found in chapter eight of CRIMINAL: THE TRUTH ABOUT WHY PEOPLE DO BAD THINGS, available here.


Filed Under: Views

