Opinion

Should we feel more excited or scared about AI? Or both!

By Mike Robinson, British Safety Council

Recent world events have brought home just how fortunate those of us who can expect to stay safe really are. Most of us do not live in places like Gaza or Ukraine. But conflicts like these can spill over, with their ripples felt worldwide, not to mention the risks of nuclear escalation.


The world as we have known it for 50 years no longer feels as certain, but uncertainty has always been the only thing we can rely on.

The same could be said of technological change. Combine the potential of artificial intelligence (AI) with modern-day weaponry and we could quickly find ourselves in a dangerous, unpredictable situation that is very difficult to control.

The day-to-day risks AI poses to all of us also won’t be restricted to a few geographical hotspots. The truth is, AI has the potential to upend everything, everywhere and forever.

Mike Robinson: "Guardrails are needed, but so is detailed, thoughtful regulation which sets out how AI should work in practice in the real world."

We’ve heard a lot about the threat that automation and technology pose to jobs – after all, it’s a fear that has been with us since industrialisation. Thus far, however, while the nature of work has changed, job numbers have continued to grow. But could AI be different? This technology could, in theory, not just replace us but take charge of our lives.

And there are other, even more insidious and all-pervasive risks from AI and other online technologies which we are only beginning to face up to and attempt to control.

The Online Safety Act became law at the end of October, following years of debate and delay. It enters the statute book nearly two decades after smartphones and social media began to pose a risk to the safety of children and vulnerable adults, and a full six years since the tragic death of Molly Russell.

Molly died, aged 14, having spent at least a year of her life digesting content about suicide, self-harm, and depression. Speaking at the time of the inquest, Molly’s father Ian said: “It’s a world I don’t recognise. It’s a ghetto of the online world that once you fall into it, the algorithm means you can’t escape it and it keeps recommending more content.”

Are we in danger of falling into a similar trap with AI? Are we destined to be ruled and enslaved by its ability to predict and shape what we think, want, and even feel?

For the first time, the new UK Online Safety Act puts the onus on online content providers and social media companies to prevent and remove illegal content, as well as to ensure children can’t access pornography or material which promotes self-harm, bullying or eating disorders. This should be welcomed by anyone who believes in people’s right to live free from harm, especially children.

Currently, however, no such legislation or regulations exist around the development of new forms of AI. That is why the summit being held in the UK this month is important, as is the new AI Safety Institute announced by the Prime Minister.

In our sector, AI is starting to transform the way we do risk assessments: it could identify hazards, and even predict and prevent incidents before they occur. AI can also replace humans in highly dangerous or hazardous environments, through robots, drones or other machines. All of which could be extremely beneficial.

But it also poses new challenges. AI can reinforce biases and prejudices, which could lead to bad health and safety judgements – exaggerating some risks, downplaying others or even missing them altogether. People will still need to work alongside AI-controlled machinery, and we must make sure they remain in charge of the machines, not the other way around!

Guardrails are needed, but so is detailed, thoughtful regulation which sets out how AI should work in practice in the real world. Otherwise, we could be sleepwalking into a very unpredictable and unpleasant future.

Mike Robinson FCA is chief executive of the British Safety Council
