In a previous article, we looked at three different types of bias that can emerge when using algorithms to support (or make) decisions. Here, we put forward some remedies: four approaches that can help ensure that making good use of data does not perpetuate or create bias.
1. Involve people who understand bias, and the risks of algorithmic bias in particular, in the design and development of data-driven decision-making projects. Building more diverse teams helps, since diverse teams are more likely to spot the risks of bias. It is also worth checking that, alongside the technical wizards, your team includes people with appropriate training in or appreciation of the social sciences, or even dedicated data ethics specialists.
2. Commission external reviews. Several criminal justice organisations have reduced bias by opening up their new tools to external scrutiny. An independent reviewer of Durham Police’s Checkpoint programme, for example, identified a risk in using postcode as a predictor of future offending behaviour: it could unfairly disadvantage people from particular areas. The National Data Analytics Solution (NDAS) project run by West Midlands Police has put considerable work into ensuring transparency and ethical decision-making, inviting a review from the Alan Turing Institute’s Data Ethics Group. The West Midlands Police and Crime Commissioner (PCC) has also put in place an independent data ethics panel.
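To make the postcode point concrete, here is a minimal sketch of one way a reviewer might test whether a geographic feature is acting as a proxy for a protected characteristic: compare the make-up of each postcode area against the population the tool serves. The data, field names ("postcode_area", "group") and threshold are hypothetical illustrations, not details of the Checkpoint review itself.

```python
# Minimal sketch: does a postcode-area feature act as a proxy for a
# protected characteristic? Field names, data and threshold are hypothetical.
from collections import Counter, defaultdict

def proxy_report(records, threshold=0.15):
    """Flag postcode areas whose group make-up differs markedly from the
    overall population, suggesting the feature may encode group membership."""
    overall = Counter(r["group"] for r in records)
    total = sum(overall.values())
    baseline = {g: n / total for g, n in overall.items()}

    by_area = defaultdict(Counter)
    for r in records:
        by_area[r["postcode_area"]][r["group"]] += 1

    flagged = {}
    for area, counts in by_area.items():
        area_total = sum(counts.values())
        for group, base_share in baseline.items():
            area_share = counts[group] / area_total
            if abs(area_share - base_share) > threshold:
                flagged.setdefault(area, []).append(
                    (group, round(area_share, 2), round(base_share, 2))
                )
    return flagged

# Entirely synthetic illustration.
sample = [
    {"postcode_area": "DH1", "group": "A"},
    {"postcode_area": "DH1", "group": "A"},
    {"postcode_area": "DH1", "group": "B"},
    {"postcode_area": "DH7", "group": "B"},
    {"postcode_area": "DH7", "group": "B"},
    {"postcode_area": "DH7", "group": "A"},
]
print(proxy_report(sample))  # areas where a group is strongly over- or under-represented
```

A feature that strongly predicts group membership can reintroduce bias even when the protected attribute itself is excluded from the model.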
3. Measure algorithmic performance and bias systematically.
While the precise details of how many decision-support algorithms are coded are commercially sensitive, all justice organisations will have to be able to explain, in lay terms, the tools they are experimenting with and their accuracy and cost-effectiveness. The US Department of Commerce is leading the field in this area: its National Institute of Standards and Technology runs the Face Recognition Vendor Test, which reports publicly on the accuracy and racial bias of face recognition systems submitted by a large number of suppliers. We believe the UK should implement similar systems as a matter of urgency. There is arguably even a case for public sector inspectorates (or a new cross-sector inspectorate) to be given a role in scrutinising algorithms in detail and reporting publicly on any observed bias.
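As an illustration of what systematic measurement can look like for a binary risk-flagging tool, the sketch below computes accuracy, false positive rate and false negative rate per demographic group, the kind of breakdown the vendor test publishes for face recognition systems. The record format and data are hypothetical; this is a sketch of the idea, not of any particular force’s tool.

```python
# Minimal sketch of group-wise bias measurement for a binary risk flag.
# Record format ("group", "predicted", "actual") is a hypothetical illustration.
from collections import defaultdict

def group_metrics(records):
    """Compute accuracy, false positive rate and false negative rate per group."""
    counts = defaultdict(lambda: {"tp": 0, "fp": 0, "tn": 0, "fn": 0})
    for r in records:
        c = counts[r["group"]]
        if r["predicted"] and r["actual"]:
            c["tp"] += 1
        elif r["predicted"] and not r["actual"]:
            c["fp"] += 1
        elif not r["predicted"] and r["actual"]:
            c["fn"] += 1
        else:
            c["tn"] += 1

    metrics = {}
    for group, c in counts.items():
        total = sum(c.values())
        positives = c["tp"] + c["fn"]   # cases that actually reoffended
        negatives = c["fp"] + c["tn"]   # cases that did not
        metrics[group] = {
            "accuracy": (c["tp"] + c["tn"]) / total,
            "false_positive_rate": c["fp"] / negatives if negatives else 0.0,
            "false_negative_rate": c["fn"] / positives if positives else 0.0,
        }
    return metrics

# Entirely synthetic illustration.
sample = [
    {"group": "A", "predicted": True,  "actual": True},
    {"group": "A", "predicted": True,  "actual": False},
    {"group": "B", "predicted": False, "actual": False},
    {"group": "B", "predicted": True,  "actual": False},
]
for group, m in group_metrics(sample).items():
    print(group, m)
```

Which disparity matters most is itself a policy choice: equal false positive rates, equal accuracy and equal calibration across groups are all reasonable aims, but they generally cannot all be satisfied at once.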
4. Create a more robust legislative framework.
Ensuring inspectorate access to commercially produced algorithms would be likely to require legislation, and further legislative change may be required to ensure proper use of data across the public and private sectors. The US is making some progress here, but there is much more to do, not least because global corporations will require (and benefit from) some standardisation across geographies. With the UK leaving the EU, which has led much of the work on data protection and other data-processing legislation, this is another area where the UK government may need to develop additional expertise post-Brexit.