Opinion
Should we feel more excited or scared about AI? Or both?
The world as we have known it for 50 years no longer feels as certain, but uncertainty has always been the only thing we can rely on.
Recent world events have brought home just how lucky we are if we can expect to stay safe. Fortunately, most of us do not live in places like Gaza or Ukraine. But conflicts like these can spill over, with their ripples felt worldwide, not to mention the risks of nuclear escalation.
The same could be said of technological change. Combine the potential of artificial intelligence (AI) with modern-day weaponry and we could quickly find ourselves in a dangerous, unpredictable situation that is very difficult to control.
The day-to-day risks AI poses to all of us also won’t be restricted to a few geographical hotspots. The truth is, AI has the potential to upend everything, everywhere and forever.
We’ve heard a lot about the threat automation and technology pose to jobs – after all, it’s a fear that has been with us since industrialisation. So far, while the nature of work has changed, job numbers have continued to grow. But could AI be different? This technology could, in theory, not just replace us but take charge of our lives.
And there are other even more insidious and all-pervasive risks from AI and other online technologies which we are just beginning to face up to and attempt to control.
The Online Safety Act became law at the end of October, following years of debate and delay. It enters the statute book nearly two decades after smartphones and social media began to pose a risk to the safety of children and vulnerable adults, and a full six years after the tragic death of Molly Russell.
Molly died, aged 14, having spent at least a year of her life digesting content about suicide, self-harm, and depression. Speaking at the time of the inquest, Molly’s father Ian said: “It’s a world I don’t recognise. It’s a ghetto of the online world that once you fall into it, the algorithm means you can’t escape it and it keeps recommending more content.”
Are we in danger of falling into a similar trap with AI? Are we destined to be ruled and enslaved by its ability to predict and shape what we think, want, and even feel?
For the first time, the new UK Online Safety Act puts the onus on online content providers and social media companies to prevent and remove illegal content as well as ensure children can’t access pornography or material which promotes self-harm, bullying or eating disorders. This should be welcomed by anyone who believes in people’s right to live free from harm, especially children.
Currently, however, no such legislation or regulations exist around the development of new forms of AI. It’s why the summit being held this month in the UK is important, as is the new AI Safety Institute, which the Prime Minister has announced.
In our sector, AI is starting to transform the way we do risk assessments: it could identify hazards, and even predict and stop incidents before they occur. AI can also replace humans in highly dangerous or hazardous environments, using robots, drones or other machines. All of which could be extremely beneficial.
But it also poses new challenges. AI can reinforce biases and prejudices, which could lead to bad health and safety judgements – exaggerating some risks, downplaying others or even missing them altogether. People will still need to work alongside AI-controlled machinery, and we must make sure they remain in charge of the machines, not the other way around!
Guardrails are needed, but so is detailed, thoughtful regulation which sets out how AI should work in practice in the real world. Otherwise, we could be sleepwalking into a very unpredictable and unpleasant future.
Mike Robinson FCA is chief executive of the British Safety Council