Opinion

Should we feel more excited or scared about AI? Or both?

By Mike Robinson FCA, British Safety Council

Recent world events have brought home just how lucky those of us who can expect to stay safe really are. Fortunately, most of us do not live in places like Gaza or Ukraine. But conflicts like these can spill over, with their ripples felt worldwide, not to mention the risks of nuclear escalation.


The world as we have known it for 50 years no longer feels as certain, but uncertainty has always been the only thing we can rely on.

The same could be said of technological change. Combine the potential of artificial intelligence (AI) with modern-day weaponry and we could quickly find ourselves in a dangerous, unpredictable situation that is very difficult to control.

The day-to-day risks AI poses to all of us also won’t be restricted to a few geographical hotspots. The truth is, AI has the potential to upend everything, everywhere and forever.

Mike Robinson: "Guardrails are needed, but so is detailed, thoughtful regulation which sets out how AI should work in practice in the real world."

We’ve heard a lot about the threat of both automation and technology to jobs – after all, it’s a fear that has been with us since industrialisation. Thus far, however, while the nature of the work has changed, job numbers have continued to grow. But could AI be different? This technology could in theory not just replace us but take charge of our lives.

And there are other even more insidious and all-pervasive risks from AI and other online technologies which we are just beginning to face up to and attempt to control.

The Online Safety Act became law at the end of October, following years of debate and delay. It enters the statute book nearly two decades after smartphones and social media began to pose a risk to the safety of children and vulnerable adults, and a full six years since the tragic death of Molly Russell.

Molly died, aged 14, having spent at least a year of her life digesting content about suicide, self-harm, and depression. Speaking at the time of the inquest, Molly’s father Ian said: “It’s a world I don’t recognise. It’s a ghetto of the online world that once you fall into it, the algorithm means you can’t escape it and it keeps recommending more content.”

Are we in danger of falling into a similar trap with AI? Are we destined to be ruled and enslaved by its ability to predict and shape what we think, want, and even feel?

For the first time, the new UK Online Safety Act puts the onus on online content providers and social media companies to prevent and remove illegal content, as well as to ensure children can't access pornography or material which promotes self-harm, bullying or eating disorders. This should be welcomed by anyone who believes in people's right to live free from harm, especially children.

Currently, however, no such legislation or regulations exist around the development of new forms of AI. It’s why the summit being held this month in the UK is important, as is the new AI Safety Institute, which the Prime Minister has announced.

In our sector, AI is starting to transform the way we do risk assessments: it could identify hazards, and even predict and prevent incidents before they occur. AI can also be used to replace humans in highly dangerous or hazardous environments, with robots, drones or other machines. All of which could be extremely beneficial.

But it also poses new challenges. AI can reinforce biases and prejudices, which could lead to bad health and safety judgements – exaggerating some risks, downplaying others or even missing them altogether. People will still need to work alongside AI-controlled machinery, and we must make sure they remain in charge of the machines, not the other way around!

Guardrails are needed, but so is detailed, thoughtful regulation which sets out how AI should work in practice in the real world. Otherwise, we could be sleepwalking into a very unpredictable and unpleasant future.

Mike Robinson FCA is chief executive of the British Safety Council
