A Closer Look at the UK’s New Algorithmic Transparency Standard
Governments are increasingly using advanced algorithms to automate or support decisions that affect people's lives, from marking exams to detecting fraud. This has led to calls for greater transparency to ensure that algorithms do not entrench bias and that people can understand, and if necessary challenge, AI-based outcomes. In response, some governments are taking preliminary steps to standardise and govern algorithmic transparency.
At the end of 2021, the UK Cabinet Office and the Centre for Data Ethics and Innovation (CDEI) published one of the world's first national algorithmic transparency standards. The standard is simple: it consists of a template that public sector organisations are encouraged to complete for any algorithmic tool that either engages directly with the public (e.g., a chatbot) or meets a set of risk-based criteria. The information collected will be published in a public register. Over the coming months, the standard will be piloted by public sector organisations on a voluntary basis and put forward for formal endorsement later this year.
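To make the shape of such a template concrete, the sketch below renders a hypothetical transparency record as structured data. The field names and the example tool are illustrative assumptions for discussion only; the CDEI's actual template defines its own fields.

```python
# Illustrative sketch of a transparency record. All field names and values
# are hypothetical assumptions, not the CDEI's published template.
transparency_record = {
    "tool_name": "Benefit Claim Triage Assistant",  # hypothetical tool
    "organisation": "Example Government Department",
    "description": "Ranks incoming claims for manual review.",
    "engages_public_directly": False,
    "uses_machine_learning": True,
    "human_oversight": "Caseworkers review every flagged claim.",
    "data_sources": ["historical claims", "payment records"],
    "contact": "ai-transparency@example.gov.uk",  # placeholder address
}
```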
This is one of the UK Government's first steps in a broader agenda to promote the trustworthy use of data and AI, and it is indicative of a wider direction of travel in the public sector. The UK standard builds on similar algorithmic transparency initiatives introduced at the national level in France and at the city level in Amsterdam, Helsinki, and New York.
IDC’s Take
Governments should take a balanced approach to transparency standards, one that is proportionate to risk and accounts for the associated workload for civil servants. Simplicity is a prerequisite for success: if the standard is too complex or cumbersome, public sector organisations will struggle to implement it, and it may inadvertently disincentivise the use of this technology despite its potential benefits. In France, for example, the Digital Republic Act mandates transparency for some public sector algorithms, yet at the outset agencies struggled to comply, in part due to a lack of capacity and clear guidance.
Mindful of this, the CDEI has introduced a set of risk-based criteria. Not all algorithms will need to be included on the register, only those that (i) are complex (e.g., use machine learning), (ii) have a potential legal, social, or economic impact on individuals or populations, and (iii) replace or assist human decision making. This cuts the red tape around more mundane public administration functions, such as invoice recognition, that pose minimal risk of algorithmic bias.
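A minimal sketch of this scoping logic in code may help illustrate why low-risk tools stay off the register. This assumes one reading of the criteria, namely that the public-facing test stands alone and the three risk criteria apply together; the CDEI's own guidance is the authoritative source.

```python
from dataclasses import dataclass


@dataclass
class AlgorithmicTool:
    engages_public_directly: bool   # e.g., a chatbot
    is_complex: bool                # e.g., uses machine learning
    has_individual_impact: bool     # potential legal, social, or economic impact
    affects_human_decisions: bool   # replaces or assists human decision making


def requires_transparency_record(tool: AlgorithmicTool) -> bool:
    """Return True if the tool falls within the standard's scope,
    under the (assumed) reading of the criteria described above."""
    if tool.engages_public_directly:
        return True
    # Assumption: the three risk-based criteria are applied together.
    return (tool.is_complex
            and tool.has_individual_impact
            and tool.affects_human_decisions)


# A routine invoice-recognition tool that informs no individual decision
# would stay off the register under this reading.
invoice_ocr = AlgorithmicTool(False, True, False, False)
assert not requires_transparency_record(invoice_ocr)
```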
Further, government organisations should not stop at compliance; standards are only one piece of the puzzle. They should also look holistically at explainability in algorithm design, as well as data quality and management.
What Does This Mean for Government Organisations and AI Suppliers?
IDC expects to see increased regulation of advanced algorithms in the public sector and beyond. Governments in countries with voluntary or mandatory transparency standards should get ahead of the curve: begin implementing the standard straight away, socialise it within their organisations, and build sufficient capacity to meet the requirements. Suppliers and service providers working on public sector contracts will also need to be ready to provide this information for eligible algorithm projects, and they can work with governments to take a more holistic approach by promoting best practices for explainability and data quality management.
If you have an IDC subscription, read our new publication to find out more on how public sector organisations and vendors can prepare for algorithmic transparency reforms.
We will continue to monitor the UK pilot and other transparency initiatives. Get in touch to discuss this with our analysts: Louisa Barker; Massimiliano Claps; Neil Ward-Dutton
Related Publications:
Algorithmic Transparency in Government: Early Efforts to Standardize Practice
IDC PeerScape: Practices for Successfully Delivering Explainable AI