Opinion

Should we feel more excited or scared about AI? Or both?

By Mike Robinson FCA, British Safety Council

Recent world events have brought home just how lucky we are if we can expect to stay safe. Fortunately, most of us do not live in places like Gaza or Ukraine. But conflicts like these can spill over, with their ripples felt worldwide, not to mention the risks of nuclear escalation.


The world as we have known it for 50 years no longer feels as certain, but uncertainty has always been the only thing we can rely on.

The same could be said of technological change. Combine the potential of artificial intelligence (AI) with modern-day weaponry and we could quickly find ourselves in a dangerous, unpredictable situation that is very difficult to control.

The day-to-day risks AI poses to all of us also won’t be restricted to a few geographical hotspots. The truth is, AI has the potential to upend everything, everywhere and forever.

Mike Robinson: "Guardrails are needed, but so is detailed, thoughtful regulation which sets out how AI should work in practice in the real world."

We’ve heard a lot about the threat automation and technology pose to jobs – after all, it’s a fear that has been with us since industrialisation. Thus far, however, while the nature of work has changed, job numbers have continued to grow. But could AI be different? This technology could, in theory, not just replace us but take charge of our lives.

And there are other even more insidious and all-pervasive risks from AI and other online technologies which we are just beginning to face up to and attempt to control.

The Online Safety Act became law at the end of October, following years of debate and delay. It enters the statute book nearly two decades after smartphones and social media began to pose a risk to the safety of children and vulnerable adults, and a full six years since the tragic death of Molly Russell.

Molly died, aged 14, having spent at least a year of her life digesting content about suicide, self-harm, and depression. Speaking at the time of the inquest, Molly’s father Ian said: “It’s a world I don’t recognise. It’s a ghetto of the online world that once you fall into it, the algorithm means you can’t escape it and it keeps recommending more content.”

Are we in danger of falling into a similar trap with AI? Are we destined to be ruled and enslaved by its ability to predict and shape what we think, want, and even feel?

For the first time, the new UK Online Safety Act puts the onus on online content providers and social media companies to prevent and remove illegal content, as well as to ensure children can’t access pornography or material which promotes self-harm, bullying or eating disorders. This should be welcomed by anyone who believes in people’s right to live free from harm, especially children.

Currently, however, no such legislation or regulation exists around the development of new forms of AI. That is why the summit being held this month in the UK is important, as is the new AI Safety Institute, which the Prime Minister has announced.

In our sector, AI is starting to transform the way we do risk assessments: it can help identify hazards, and even predict and prevent incidents before they occur. AI can also be used to replace humans in highly dangerous or hazardous environments, with robots, drones or other machines. All of which could be extremely beneficial.

But it also poses new challenges. AI can reinforce biases and prejudices, which could lead to bad health and safety judgements – exaggerating some risks, downplaying others or even missing them altogether. People will still need to work alongside AI-controlled machinery, and we must make sure they remain in charge of the machines, not the other way around!

Guardrails are needed, but so is detailed, thoughtful regulation which sets out how AI should work in practice in the real world. Otherwise, we could be sleepwalking into a very unpredictable and unpleasant future.

Mike Robinson FCA is chief executive of the British Safety Council
