Opinion

Should we feel more excited or scared about AI? Or both?

By Mike Robinson

Recent world events have brought home just how lucky we are if we can expect to stay safe. Fortunately, most of us do not live in places like Gaza or Ukraine. But conflicts like these can spill over, with their ripples felt worldwide, not to mention the risks of nuclear escalation.


The world as we have known it for 50 years no longer feels as certain, but uncertainty has always been the only thing we can rely on.

The same could be said of technological change. Combine the potential of artificial intelligence (AI) with modern-day weaponry and we could quickly find ourselves in a dangerous, unpredictable situation that is very difficult to control.

The day-to-day risks AI poses to all of us also won’t be restricted to a few geographical hotspots. The truth is, AI has the potential to upend everything, everywhere and forever.


We’ve heard a lot about the threat that automation and technology pose to jobs – after all, it’s a fear that has been with us since industrialisation. So far, however, while the nature of work has changed, job numbers have continued to grow. But could AI be different? This technology could, in theory, not just replace us but take charge of our lives.

And there are other even more insidious and all-pervasive risks from AI and other online technologies which we are just beginning to face up to and attempt to control.

The Online Safety Act became law at the end of October, following years of debate and delay. It enters the statute book nearly two decades after smartphones and social media began to pose a risk to the safety of children and vulnerable adults, and a full six years after the tragic death of Molly Russell.

Molly died, aged 14, having spent at least a year of her life digesting content about suicide, self-harm, and depression. Speaking at the time of the inquest, Molly’s father Ian said: “It’s a world I don’t recognise. It’s a ghetto of the online world that once you fall into it, the algorithm means you can’t escape it and it keeps recommending more content.”

Are we in danger of falling into a similar trap with AI? Are we destined to be ruled and enslaved by its ability to predict and shape what we think, want, and even feel?

For the first time, the new UK Online Safety Act puts the onus on online content providers and social media companies to prevent and remove illegal content, as well as to ensure children can’t access pornography or material which promotes self-harm, bullying or eating disorders. This should be welcomed by anyone who believes in people’s right to live free from harm, especially children.

Currently, however, no such legislation or regulation exists around the development of new forms of AI. That is why the summit being held this month in the UK is important, as is the new AI Safety Institute, which the Prime Minister has announced.

In our sector, AI is starting to transform the way we do risk assessments: it can identify hazards, and even predict and prevent incidents before they occur. AI can also replace humans in highly hazardous environments, using robots, drones or other machines. All of this could be extremely beneficial.

But it also poses new challenges. AI can reinforce biases and prejudices, which could lead to bad health and safety judgements – exaggerating some risks, downplaying others or even missing them altogether. People will still need to work alongside AI-controlled machinery, and we must make sure they remain in charge of the machines, not the other way around!

Guardrails are needed, but so is detailed, thoughtful regulation which sets out how AI should work in practice in the real world. Otherwise, we could be sleepwalking into a very unpredictable and unpleasant future.

Mike Robinson FCA is chief executive of the British Safety Council
