Opinion
Managing the risks of digitalised working
AI-powered systems, digitalisation and automation offer great potential for improving efficiency and driving innovation at work and in society, but digitalised work practices also pose a significant threat to employee wellbeing.
“I was the future once”. So said David Cameron in 2016, ending his time as Prime Minister with a quip about himself that he’d levelled at his rival Tony Blair a decade earlier. A useful reminder that – like politics – the future never stands still.
History can provide useful context to help us navigate the current technological hype cycle that would have us believe that advancements such as computer vision, autonomous vehicles, or artificial superintelligence (ASI) are all just around the corner. The potential benefits can’t be ignored, but we must be alert to the risks that come with them.
There can be no doubt that new technologies are firmly on the radar of occupational safety and health (OSH) professionals. The focus of World Day for Safety and Health at Work 2025 was the impact of digitalisation and artificial intelligence on OSH, a theme the European Agency for Safety and Health at Work (EU-OSHA) has been exploring through its broader campaign, Safe and healthy work in the digital age.
The International Labour Organization (ILO) is alive to the way digitalisation is transforming safety and health at work through automation and advanced robotics, ‘smart tools’ and monitoring systems, and the use of extended and virtual reality for training. The ILO illustrates the benefits and risks of these in its must-read 2025 global report, Revolutionizing Health and Safety: The role of AI and digitalization at work.
However, it’s the two other issues in that report – the algorithmic management of work, and the psychosocial impact associated with digital work – that hit home hardest for me. Simply put, ordinary workers face far greater exposure to risk from these everyday workplace technologies than from less frequently used tools and applications such as wearables, remote sensors, automated equipment or robots.
The tools I have in mind here are ones that use algorithms to complete work or allocate tasks more efficiently, or that reduce or augment manual input by automating output. A considerable body of academic research has been undertaken into issues such as bias, fairness, accountability, equity and privacy in algorithmic systems. But what about the impact of these tools on workers’ wellbeing?
How can we make the impact these everyday workplace technologies have on workers more visible? How can we assess and manage the risks? How should we regulate them?
Regulation of advanced AI-driven workplace technologies
I was struck by the parallel between the regulation of these advanced AI-driven workplace technologies and that of the visual display units (VDUs) and workstations of the 1980s and 90s. What might we learn? After all, they were the future once.
The European Directive setting out rules for the use of display screen equipment (90/270/EEC) came into force almost 35 years ago to the day. Among the ‘six pack’ of EC Directives launched at the time, it was the one that caused the greatest stir in the UK. There were claims of regulatory overreach, with Lord Ardwick arguing “it was not sufficiently established that the health risks were heavy enough to cause concern or that such risks could not be met by good office management”. His Select Committee concluded it “did not believe that there was justification for establishing a formal system of breaks from VDU work”.
While the unions were supportive of new regulation, many in industry were critical, citing the requirement for free eyesight provisions, which effectively shifted to employers a cost previously borne by the NHS. The directive brought further requirements for employers to risk assess and analyse workers’ use of display screen equipment; provide them with training in how to use it; and take workers’ mental health into account when planning work activities.
Looking back now, it’s hard to see how those rules could have been considered controversial. But they were quite interventionist at the time.
How radical should we be now, in seeking to mitigate the negative impacts of digitalised work practices on employee wellbeing?
It’s important to recognise there’s a serious problem that needs addressing.
Research clearly indicates that while digitalisation of work can be beneficial, it can also have a negative impact on workers. A comprehensive multi-year UK study published this year found that, rather than eliminating dull, dirty or dangerous work, task automation can often lead to “routinisation, intensification and a lower level of discretion”, impacting worker wellbeing.
These findings echo those of a 2024 report: Does Technology Use impact UK Workers’ Quality of Life? A report on worker wellbeing, which notes negative impacts such as fears over job security, feelings of purposelessness and a range of ill-health conditions including anxiety, depression and burnout. A 2024 survey of US and UK workers found that 59 per cent of respondents identified digital distraction as a contributory factor in workplace stress, rising to 71 per cent among managers.
These studies rightly frame harms in terms of the potential impact on users of new technologies – in the same way the European Community sought to regulate the risks of workstation use 35 years ago.
However, there is also now a much greater awareness of the impact of algorithmic working on everyone involved in the process of designing, building and delivering algorithmic tools – not just the workers who use them. That includes what leading academic Kate Crawford describes as a huge invisible chain of “resource extraction, human labour and algorithmic processing across networks of mining, logistics, distribution, processing, prediction and optimisation”.
Too often, the labour involved in the extraction and trade of these minerals, and in the manufacture and assembly of the products, is invisible: poorly paid, unregulated and often dangerous, from data labelling done by ‘clickworkers’ for a fraction of a cent per task to itinerant workers cleaning up the toxic waste left behind once the minerals have been extracted.
Regulation remains an important tool for managing the risks of digitalised working. But coordinated policymaking and education initiatives – making the people visible, so that wider impacts can be assessed – are increasingly important in a field where global supply chains span legislative borders. We should be cautious not to fall for the hype behind the AI machine: a careful assessment of risk and reward is needed if we are to nurture and protect the people engaged in producing and using it. OSH practitioners have an important role to play in this process.
David Sharp is CEO of health and safety learning provider International Workplace. He is a Fellow of the Institute of Workplace and Facilities Management (IWFM), and a Technical Member of IOSH. He holds a Master’s in AI Ethics and Society from the Leverhulme Centre for the Future of Intelligence at the University of Cambridge.
For more information, see:
internationalworkplace.com
[email protected]
T. +44 (0)333 210 1995
linkedin.com/company/internationalworkplace/