
AI revolution meets regulation: how the Health and Safety Executive is regulating AI

We explore how AI can enhance workplace health and safety.


The development of artificial intelligence (“AI”) has been transformative in so many areas of business. Weightmans’ new arrival in the health & safety team, Rowenna Allen, discusses the potential applications of AI in managing safety within your business, and how this might be regulated by the Health and Safety Executive (“HSE”).

In 2023, the UK government released a white paper titled ‘A pro-innovation approach to AI regulation’, outlining how AI will be regulated in the UK (with a subsequent government response published in February 2024). The paper highlights that the UK is home to around one third of Europe’s total AI companies, twice as many as any other European country, underlining the scale of the regulatory task. It cites mining as an example of where AI can improve safety and efficiency by replacing humans in high-risk environments, and notes that “AI has revolutionised large-scale safety-critical practices in industry, like controlling the process of nuclear fusion”.

The Health and Safety at Work etc. Act 1974 is goal-setting (as opposed to prescriptive), so it applies regardless of the AI technology used. The legislation is goal-setting in the sense that it does not prescribe exactly how risks must be controlled; instead, duty holders must reduce risks so far as is reasonably practicable. This gives duty holders the freedom to meet regulatory requirements in the most cost-effective way, allowing companies to apply new technologies to control risks.

There are numerous practical applications for AI in a health and safety context, some examples being:

  • Replacing human workers in high-risk settings – robots can undertake more dangerous tasks involving heavy lifting or handling toxic chemicals.
  • Smart cameras monitoring and flagging near misses or high-risk areas.
  • Wearable devices offering insights into employees’ posture, heart rate, fatigue or stress levels.
  • Sensor technologies flagging and preventing potential collisions.
  • Facial recognition and other biometric systems (such as fingerprint readers) monitoring site access.

These systems can analyse data and recognise patterns to predict and prevent workplace incidents, allowing employers to foster safer environments, enhance safety through training and respond to hazards more efficiently.

How is the HSE regulating AI?

The UK government’s white paper outlines that there will not be a single AI regulator. Instead, existing regulators will be responsible for regulating AI within their respective sectors. This means that the HSE will be responsible for regulating AI within its regulatory remit of health and safety at work.

The white paper lays out several cross-sectoral principles which guide regulators to help ensure safe, responsible AI innovation. These principles are:

  • Safety, security and robustness.
  • Appropriate transparency and explainability.
  • Fairness.
  • Accountability and governance.
  • Contestability and redress.

The HSE will interpret and apply these principles, and has acknowledged that those most relevant to its role are:

  • Safety, security and robustness.
  • Appropriate transparency and explainability.
  • Accountability and governance.

The white paper outlines that “it will be important for all regulators to assess the likelihood that AI could pose a risk to safety in their sector or domain, and take a proportionate approach to managing it”. In response to this, the HSE has outlined that it regulates AI in a way that aligns with its missions and priorities. The regulator’s approach encompasses:

  • Regulating the use of AI where it impacts on health and safety in workplaces where the HSE is the enforcing authority.
  • Regulating the use of AI in the design, manufacture and supply of machinery, equipment and products for use in the workplace, in its capacity as a Market Surveillance Authority under the Product Safety regulatory framework.
  • Regulating where AI impacts on the HSE’s role to protect people and places, including building safety, chemicals and pesticides regulation.

The HSE recently launched a research project to consider AI’s impact on health and safety across the industries it regulates, requesting “expertise on industry examples of AI use”. The survey closed on 4 October 2024, and the HSE is now developing a database of workplace AI uses, gathering information on applications being trialled and deployed across the sectors it regulates. This will help the HSE consider the risks and opportunities of AI use in industrial settings.

The HSE has adopted an evolving approach, working with stakeholders as the technology develops. It has outlined several areas in which it will continue to develop its regulatory approach, including:

  • Co-ordinating work and sharing knowledge through an AI common interest group.
  • Working with government departments to shape the regulatory approach.
  • Supporting the standards-making process to establish benchmarks for AI interaction with machinery and functional safety.
  • Establishing relationships with industry and academic stakeholders to share knowledge and learning.
  • Collaborating with other regulators.
  • Identifying AI developments of interest through horizon scanning.
  • Building capability and experience across specialist and scientific areas.
  • Supporting research bids that align with the HSE’s areas of research interest.
  • Setting up and trialling an Industrial Safetytech Regulatory Sandbox.

Risk management and other issues

It is important that thorough risk assessments are undertaken when implementing new technologies, and the HSE expects risk assessments to cover any AI technology deployed. Employers have a responsibility to take all reasonably practicable measures to eliminate or mitigate foreseeable risks. It is vital that over-reliance on AI does not lead to complacency; human oversight will remain fundamental. To risk assess effectively, employers will need to fully understand the AI they are implementing.

Data protection concerns will arise where sensitive biometric data is used (such as facial recognition, fingerprint sensors and health-related data), and it is important to consider how use of this technology could affect employee trust. Processing of personal data involved in the training, design and use of such systems must comply with the UK GDPR and the Data Protection Act 2018.

Employers will also need to consider employee attitudes towards such technology. Over-reliance on AI could raise concerns around job retention, and if employees feel overly monitored, it could negatively affect company culture. AI also poses a bias risk; the government’s white paper discusses concerns about the potential for bias and discrimination. Unaccounted-for bias in AI-automated processes could result in discriminatory outcomes – for example, systems assessing safety risks could over- or under-estimate risk to certain groups based on biased or incomplete historical data.

It is also unclear how the use of AI could affect enforcement and sentencing in relation to health and safety offences. Cost-cutting at the expense of safety is expressly included as an aggravating factor in the sentencing guidelines, so deploying technology at the expense of safety could worsen a potential sentence. How the HSE and the courts approach enforcement and sentencing where AI has played a role will undoubtedly be tested in the coming years.

Final thoughts for businesses

Overall, it is certainly exciting that AI offers the opportunity to improve workplace health and safety. Early adoption of AI could deliver competitive advantages and drive innovation, but the technology is advancing so quickly that it could prove challenging to regulate, especially in novel cases involving health and safety offences. It will be particularly important for duty holders using AI in their operations to demonstrate due diligence and a thorough understanding of the technology.

The HSE has indicated it will continue developing its regulatory approach to AI; companies looking to embrace this technology should pay close attention to inform their implementation strategies.

 

Read the white paper here and the government response here.

For more information on using AI to improve workplace health and safety, contact our health and safety solicitors.


Written by:


Anna Naylor

Partner

Anna is a regulatory defence lawyer specialising in advising organisations on health and safety, food safety, trading standards and environmental law.
