Background – white paper response on the UK’s approach to AI regulation
In February 2024, the UK Department for Science, Innovation and Technology (DSIT) published its response to the consultation on its 2023 white paper, ‘A pro-innovation approach to AI regulation’ (the White Paper), setting out the government’s proposed approach to AI regulation. DSIT confirmed that, for the time being, the UK will follow its proposed approach of setting cross-sectoral principles to be enforced by existing regulators, rather than passing new legislation to regulate AI. These principles (the Principles) are:
- Safety, security and robustness.
- Appropriate transparency and explainability.
- Fairness.
- Accountability and governance.
- Contestability and redress.
The letters
At the time of publication of the response to the White Paper, ministers wrote to key sectoral and cross-economy regulators requesting an update on how they are taking forward the White Paper proposals and the steps they are taking to develop their strategic approaches to AI. The following regulators received a letter:
- the Financial Conduct Authority (FCA)
- the Bank of England (BoE)
- the Information Commissioner’s Office (ICO)
- the Office of Communications (Ofcom)
- the Medicines and Healthcare products Regulatory Agency (MHRA)
- the Office for Nuclear Regulation (ONR)
- the Health and Safety Executive (HSE)
- the Office for Standards in Education, Children’s Services and Skills (Ofsted)
- the Office of Qualifications and Examinations Regulation (Ofqual)
- the Legal Services Board
- the Office of Gas and Electricity Markets (Ofgem)
- the Competition and Markets Authority (CMA)
- the Equality and Human Rights Commission (EHRC).
DSIT published the responses to these letters on 1 May 2024. These responses set out the actions regulators have taken, their upcoming plans, and how they are collaborating through the Digital Regulation Cooperation Forum (DRCF), with other regulators, and with other stakeholders. Below, we summarise the responses from DRCF members the CMA, FCA, ICO, and Ofcom, along with those of the BoE and Prudential Regulation Authority (PRA), Ofgem, the ONR, and the EHRC.
Review our summary of the key responses below, or skip to our take on the responses.
Summary of key responses
Regulator | Summary of response
--- | ---
CMA | The CMA’s response took the form of a strategic update on its approach to AI, highlighting its role in ensuring that consumers, businesses, and the wider economy reap the benefits of developments in AI while harms are mitigated. The CMA’s work on foundation models has allowed it to investigate how foundation model markets work, and the current and possible future implications for competition and consumer protection. Its publications have included an initial report, an update paper, a technical update report and its Trends in Digital Markets report. More broadly, it considers potential issues that AI could raise (for example, anti-competitive self-preferencing). The CMA will continue to monitor for potential competition and consumer protection concerns, applying its six principles (access, diversity, choice, fair dealing, transparency, and accountability) to identify any such concerns. For more information on the CMA’s publications, see our post. The CMA plans to use new powers under the Digital Markets, Competition and Consumers Bill to raise standards in the market. It stresses the need for cooperation with other regulators and highlights its work through the DRCF, as well as directly with regulators such as the ICO. For more information on the interaction between AI and competition law, watch our video interview. |
FCA | In its AI update, the FCA welcomes the government’s principles-based, sector-led approach to AI and confirms that it is focused on how firms can safely and responsibly adopt the technology, as well as on understanding the impact AI innovations are having on consumers and markets, including close scrutiny of the systems and processes firms have in place to ensure regulatory expectations are met. The document outlines the ways in which the FCA’s approach to regulation and supervision addresses the Principles. It also sets out what the FCA plans to do in the next 12 months in relation to AI, including: continuing to further its understanding of AI deployment in UK financial markets; building on existing foundations (the existing framework aligns with and supports the Principles in various ways, but the FCA will closely monitor the situation and may actively consider future regulatory adaptations if needed); collaboration, including regular collaboration with domestic partners and further prioritising international engagement; testing for beneficial AI, including working with DRCF member regulators to deliver the pilot AI and Digital Hub, whilst running its own Digital Sandbox and Regulatory Sandbox and exploring an AI sandbox; its own use of AI, including investing more in technologies to proactively monitor markets, including for market surveillance purposes; and looking towards the future, taking a proactive approach to understanding emerging technologies and their potential impact as part of its Emerging Technology Research Hub. The AI update was published alongside a speech by chief executive Nikhil Rathi announcing the FCA’s plans to focus on Big Tech, and Feedback Statement FS24/1 on data asymmetry between Big Tech and firms in financial services. See our update on the FCA’s plans for further detail. |
BoE, including the PRA | The BoE’s response took the form of a joint letter with the PRA. The letter outlines the BoE’s and the PRA’s statutory objectives and secondary objectives, and explains that their focus is on understanding how to support the safe and responsible adoption of AI and machine learning (ML) in financial services from a macro-financial and prudential perspective, given both the potential benefits, including driving innovation, that AI/ML could bring to firms and the potential risks that adoption could pose to their objectives. The letter identifies four key areas of focus that the BoE and PRA have been exploring, where further clarification of their regulatory framework could be beneficial and which are relevant to AI and ML: data management; model risk management; governance; and operational resilience and third-party risks. The BoE and PRA are planning to run the third instalment of their ‘ML in UK financial services’ survey, to ensure their understanding of AI/ML adoption remains up to date. In addition, given the rapid pace of innovation and widespread use cases, they are undertaking deeper analysis of the potential financial stability implications of AI/ML over the course of 2024; that analysis will be considered by the Financial Policy Committee. See our update for more information on the BoE and PRA response. |
ICO | In its strategic update, the ICO emphasises the potential benefits AI can bring across sectors, while highlighting the inherent risks associated with AI concerning, for example, transparency, security, and fairness. It stresses that data protection law is risk-based: it requires these risks to be mitigated and managed via technical and organisational measures, but not necessarily eliminated entirely. The ICO highlights specific focus areas: foundation models, high-risk AI applications (including emotion recognition technology, on which it has previously issued a warning), facial recognition technology, biometrics, and AI’s impact on children (highlighting its recent children’s code strategy). It also sets out how the Principles map to data protection law principles, obligations, and rights. The ICO highlights recent enforcement action and emphasises that it will use its “full regulatory toolbox”, including enforcement notices and monetary penalty notices. It also highlights its existing guidance on AI and data protection and on automated decision-making and profiling; consultations on updates to that guidance are expected from spring 2025. It will also consult on biometric classification in spring 2024, and will continue its consultation series on generative AI. Collaboration with stakeholders and regulators remains a key focus for the ICO in shaping AI regulation and policies. |
Ofcom | Ofcom’s strategic approach to AI 2024/2025 sets out three key risk areas and its work to date to address them: (i) synthetic media: AI tools that generate synthetic media can be used to create content depicting child sexual abuse, acts of terrorism, and non-consensual pornography; (ii) the use of AI to personalise content, leading to the amplification of illegal and harmful content; and (iii) more advanced forms of AI, like GenAI, being used to develop more virulent malware, identify vulnerabilities in networks, and/or provide instructions on how to breach network security. Its work to address these risks includes publishing the draft Illegal Harms Code of Practice, publishing research to understand adoption of and attitudes towards GenAI, and engaging with standards bodies like the European Telecommunications Standards Institute and the International Organisation for Standardisation. To address these risks, Ofcom has over 100 technology experts (including approximately 60 AI experts) in its data and technology teams, some with direct experience of developing AI tools. It is building strategic partnerships with academic institutions, developing data skills across Ofcom, and cooperating with other regulators through the DRCF and internationally. Its planned AI work spans online safety, broadcasting, telecoms, and cross-cutting initiatives. In online safety, its planned actions include researching the merits of red teaming to detect vulnerabilities in AI models and tools, of synthetic media detection tools, and of automated content classifiers. It will also research the merits of using GenAI for content moderation, as well as potential methods for preventing children from encountering GenAI pornographic content. Cross-cutting actions include monitoring and engaging with AI developments internationally, including the EU’s AI Act, horizon scanning to identify emerging and longer-term AI developments, and engaging with the government, including through its Central AI Risk Function. |
ONR | The ONR’s pro-innovation approach to regulating AI in the nuclear sector sets out the ONR’s strategic approach and states that the ONR is already well aligned with the Principles. The ONR has established an AI-focused team of specialist safety and security inspectors alongside the ONR innovation hub, and has commissioned research to explore the suitability of the ONR’s current approach to regulating AI. The ONR sets out its programme of work as follows: taking part in a five-year UK Research and Innovation (UKRI)-funded project alongside universities, nuclear site licensees and other regulators, focusing on the development of robotics for the UK nuclear industry and how to structure associated safety arguments; developing regulatory sandboxing as a tool for duty holders and regulators to test innovative technologies in collaboration while exploring the suitability of the existing regulatory framework; updating guidance for ONR inspectors, supported by early engagement with duty holders to foster open dialogue and explain its regulatory expectations; regularly engaging with international partners and UK regulators, sharing best practice and collaborating to build regulatory capability; and providing regulatory advice to duty holders and the wider nuclear sector. |
Ofgem | Ofgem’s response emphasises that AI is already being used by the energy sector across England, Scotland and Wales (Great Britain). AI can create efficiencies for energy bill payers, energy suppliers and the wider energy sector, but it can also create challenges and risks. It could play a big part in decarbonising the energy sector: for example, it can be used to better predict weather, helping to improve solar generation forecasts. Ofgem recently put out a call for input on the use of AI within the energy sector, which closed on 17 May 2024. Ofgem’s recommendations in the call for input include: collaborating with the Office for AI (part of DSIT) and relevant regulators such as the CMA, the ICO, the EHRC and the HSE, along with other relevant stakeholders, including through the establishment of an AI Best Practice Cross Industry Forum; considering the potential for AI collusion in energy markets, by continuing discussions with the CMA and energy sector regulators in other jurisdictions; ensuring interoperability with international markets is considered, with any issues and challenges effectively addressed in a joined-up manner by UK and international regulators; considering the need for additional regulatory tools, such as an AI sandbox; considering the issue of liability and the AI supply chain to ensure effective measures are in place to regulate the sector; and developing specific guidance for the sector, through which it anticipates minimising the need for formal intervention. Ofgem’s proposed next steps are to: analyse the findings from the call for input and update its approach to regulating AI; develop AI guidance for energy bill payers, energy companies and organisations; continue to work with companies and organisations across the energy sector; and continue to research and identify the opportunities and threats associated with the use of AI in the energy sector. |
EHRC | The EHRC’s response highlights that it is a small strategic regulator in comparison to many of the other regulators being called upon to play a part in regulating AI. It states that its budget has remained static at £17.1 million since 2016, with the consequence that its ability to scale up and respond to the risks to equality and human rights presented by AI is limited. The approach it outlines reflects the need to prioritise its work. The EHRC prioritised AI in its 2022–25 strategic plan. It has already issued guidance on AI and the Public Sector Equality Duty and has undertaken focused work on specific issues, including exploring potential discrimination resulting from online recruitment through social media platforms and supporting litigation to challenge the potentially discriminatory use of facial recognition technology (FRT) in the workplace. It is partnering with the Responsible Technology Adoption Unit (RTAU), Innovate UK, and the ICO to drive the development of new socio-technical solutions to address bias and discrimination in AI systems. The EHRC confirms its broad support for the Principles, and highlights that all are relevant to the equality and human rights frameworks. It also highlights its unique role in supporting the fairness principle, both with regulated bodies and across the regulatory community. It is participating in the Fairness Innovation Challenge, alongside the ICO and the RTAU. It has also taken part in a workshop with the DRCF to explore fairness across regulatory remits. In 2024–25, the EHRC will focus predominantly on: reducing and preventing digital exclusion, particularly for older and disabled people in accessing local services; the use of AI in recruitment practices; developing solutions to address bias and discrimination in AI systems; the use of FRT by the police and elsewhere; and partnering with the RTAU on the Fairness Innovation Challenge to develop tools for tackling algorithmic bias and discrimination. |
Our take
The next government may choose to push forward AI legislation quickly, or to continue with the current approach of regulating AI through the existing framework. Whichever approach is taken, the responses above will provide a valuable reference point for organisations’ AI governance programmes; in either scenario, these regulators will continue to lead the way both on providing guidance and on enforcement.
As is to be expected, DRCF members the CMA, FCA, ICO, and Ofcom provided responses demonstrating the significant steps they have already taken, and plan to take, on regulating AI.
Most regulators’ responses highlighted resource constraints and the need to prioritise, notwithstanding the £10 million the government has earmarked to upskill the relevant regulators to address the challenges of regulating AI.
Key themes for prioritisation include foundation models, technologies that pose risks to fairness, such as facial recognition, AI’s impact on children, and ensuring competition concerns are addressed.
While regulators’ areas of focus vary according to their remits, there is a significant degree of alignment and overlap. The Principles provide a shared framework around which regulators are expected to shape their AI activities. AI governance may appear a daunting task, as it requires multiple regulatory regimes to be addressed. However, organisations can take comfort from the broad similarities in regulatory approaches. Addressing the Principles for a particular use case will not require a distinct set of actions for each in-scope regime. A holistic approach can and should be taken, considering whether additional actions are required when looking through any of the regulatory lenses focused on the technology.