‘AI presents opportunities, but not without risks’ is the key takeaway from British Safety Council’s 15th Annual Conference

We know that AI is already reshaping workplaces, offering new ways to protect employees and presenting new risks driven by the rapid development and adoption of new technologies. That’s why workplace health, safety and wellbeing in an AI-enabled world was the focus of British Safety Council’s 15th Annual Conference, held virtually on 14 October 2025.
The day’s packed agenda was opened by British Safety Council Chief Executive, Mike Robinson, who discussed the evolving OSH landscape and what it means for the future of workplace safety. “AI continues to transform how we work, live and lead…” Robinson told attendees. He finished his remarks by assuring attendees that “AI will not replace the health and safety manager, but it will change the role. [OSH] professionals will be the why and the how, and AI will augment our expertise.”
The conference was sponsored by Driving for Better Business, an award-winning free programme from National Highways. Simon Turner, Engagement Manager at Driving for Better Business, presented a session on how AI is rewriting driver safety for the 21st century and beyond. Simon told attendees that “AI is no longer a concept for the future; it’s already reshaping how organisations react and perform.”
Further, Simon made the case for AI’s ability to create safer roads for drivers and pedestrians alike, utilising a wide range of AI-enabled technologies to “enable smarter decisions” and “empower safety managers to move from reactive firefighting to proactive control.”
AI in the Real World:
The day also put a focus on sharing best practice and real-world applications, with case studies shared by several leading OSH voices, including Rob Bullen (Lead Solutions Consultant at Hands HQ), who reminded attendees of AI’s role as an analyst, one which “doesn’t replace human judgment but enhances foresight”, echoing similar comments made by Mike Robinson earlier in the day.
International learnings:
This same session also saw an international case study from Nada Jasim (Director of Safety, Risk, Regulation and Planning), who told attendees about the groundbreaking work being undertaken by Dubai’s Roads and Transport Authority (RTA).
Nada explained how RTA are building a culture where “data is not only recorded, but intelligently managed.” She noted, “We have moved from theory to practice in less than 10 years.”
The rapid advancement of AI and its adoption around the world have understandably been cited as a concern for employers and employees, who need the right knowledge and expertise to be able to navigate change well.
Nada shared the challenges and opportunities of implementing AI in a city which prides itself on rapid but sustainable growth, listing several pilot indicators and early outcomes, including 70% accuracy in detecting unsafe acts or conditions on the roads and a 90% reduction in repeated unsafe acts.
Ms Jasim echoed comments by other speakers throughout the day, acknowledging global concerns around role replacement and automation as a result of technological advancement, saying that, “AI is not just about technology, it is about how we use it to create safer, healthier, and more resilient workplaces.”

Keynote address:
The keynote address was delivered by Dr Bob Rajan OBE, Vice President of Safety Groups UK, who provided attendees with a comprehensive understanding of AI models in an OSH setting, which he said “offer vast amounts of possibilities” but stressed that this is only the case when used judiciously.
Dr Rajan reminded attendees that “human-led, meaningful control” is necessary to keep the workers of today and tomorrow safe from AI-related harms.
Understanding that AI offers meaningful opportunities for worker and workplace safety is hugely important, but we know that AI is not without its own risks, many of which come from a lack of values aligned with human betterment. This was a theme picked up in British Safety Council’s 2024 whitepaper, Navigating the future: Safer workplaces in the Age of AI, which considered the risks faced by vulnerable workers in Kenya who suffered significant psychological harm from teaching large language models (LLMs) to spot and flag harmful online content.
Dr Rajan’s session also delved into the ethical implications of using AI, which must be built on a foundation of public trust and the values of liberal and democratic societies (such as fairness, equality, and non-discrimination).
Dr Rajan spoke about the need to be both competitive and safe with AI, acknowledging that this complicates policymaking and requires state and regulatory actors to be both forward-facing and realistic about the nature of change and the challenges it brings.
Tech-driven psychosocial safety:
A panel session, chaired by Stephen Haynes (British Safety Council’s Director of Wellbeing), kicked off the afternoon’s proceedings.
It brought together leading voices from the British Standards Institution (BSI), the Institute for Employment Studies (IES), Croner, and Safety Groups UK to explore the opportunities that AI presents for addressing work-related stress.
Panellists drew on a wide range of professional experience to discuss the realities of AI in the workplace, weighing both its benefits and challenges. Data quality and inherent data biases were discussed as key challenges to AI adoption and integration into both new and existing workflows. This was underscored by Kate Field, from BSI, who said, “The output of any AI is only as good as the data it's analysing.”
Chris Wagstaff (Croner) further developed this line of thinking, questioning the place of biases and whether AI will allow for these to be removed once and for all.
Broadly speaking, we know AI models are only as good as the data which feeds them, and, as with life, biases in data sets can and do shape human responses, for better or worse. Understanding how to design out biases at the development stage, before they become problematic further down the line, must be a growing focus for developers, given the legal and regulatory frameworks surrounding equality in the workplace. Getting this wrong could have significant legal and economic ramifications for employers and developers alike.
Trust in AI was a theme underscored by all panel participants, and one drawn out earlier in the day by Dr Bob Rajan. If trust is undermined during development, these issues are likely to carry over into AI use in the workplace and risk making workplaces less psychologically safe. Consultation was seen as a key mitigation for this work-related risk, ensuring that communication and transparency are at the heart of workplace AI adoption.

Leadership in the Age of AI:
Sean Elson, Regulatory Partner at Pinsent Masons LLP, led an afternoon session on what leadership looks like in the age of AI.
Amid rapid change, we recognise the need for strong leadership to help employees navigate a world that may look and feel very different from the one they are used to. How we do this can make the difference between weathering change well and weathering it poorly.
Elson spoke about the values needed to underpin the adoption and use of AI in the workplace, telling attendees that “Those [AI tools] need to be firmly anchored to some long-standing and well-established [leadership] principles.”
Elson also touched on the psychosocial elements of AI in the workplace, echoing the sentiments from earlier sessions.
The discussion cautioned against treating every development as an ‘innovation’, recognising that many of the historical lessons from the OSH sector will still govern our approach to OSH in the age of AI. This echoed themes from an earlier article by British Safety Council Chief Executive, Mike Robinson.
The session further explored the collateral benefits of AI and how these can be recognised by boards, C-suite leaders, and managers.
A further legal session, hosted by Chris Green, Partner at Ward Hadaway LLP, looked at the legal implications of AI in health, safety and wellbeing.
Among other things, Green talked in depth about the accuracy and validity of AI-generated content, noting that a significant number of AI search results return fictitious or factually inaccurate content, which risks undoing decades of progress.
Upskilling and Transformation:
The final session of the day, facilitated by Steve Ward, British Safety Council’s IT Director, brought together leading academic and industry voices to discuss AI-powered upskilling and transformation.
Professor Adrian Hilton, of the Surrey Institute for People Centred AI, made the argument for AI as a tool that helps to “make better informed decisions, but not to make the decisions for us.”
Microsoft’s Hector Minto spoke passionately about the power of technology to improve inclusion in the workplace, asking how both new and existing technologies can be used to improve equality and inclusion. Minto shared how Microsoft’s tools are grounded in use cases that reduce barriers and increase accessibility.
Minto also discussed the future of AI, in which AI agents enter businesses and support colleagues in innumerable ways.
The session was concluded by Professor Nazrul Islam, of the University of East London, who spoke of AI as a driver of inclusion and a “tool that supports people, not sidelines them.” He raised practical questions around the nature and design of work: how do we ensure that all employees can benefit from AI-enabled education and training, particularly those who may struggle to engage with written or visual media? Bitesize training and micro-learning, said Professor Islam, are ways in which all workers can engage with the benefits AI has to offer.
The session finished with an audience Q&A that covered everything from role replacement to AI in clinical settings.

Reflections:
As the day drew to a close, a shared understanding had emerged among speakers: that the future of work will be defined not by technology itself, but by how we choose to apply it. AI offers unprecedented opportunities to improve safety, efficiency, and inclusion across all sectors, but only if it is developed and deployed responsibly, with people at its heart.
Speakers throughout the day reminded us that while AI can help predict, prevent, and respond to risks more effectively than ever before, it cannot replace human judgment, empathy, or leadership.
The discussions on data quality, bias, and psychosocial wellbeing underscored the importance of building systems that reflect our shared values and uphold equality. Equally, the sessions on leadership and upskilling made clear that safe and successful adoption of AI depends on informed, confident, and adaptable leaders who are equipped to navigate change.
If one message resonated above all others, it was this: AI must serve to augment human expertise, not replace it. As workplaces evolve, our collective challenge will be to embed AI in ways that enhance, rather than endanger, the wellbeing, dignity, and safety of every worker.
In this sense, our 15th Annual Conference did more than examine the role of AI in health and safety; it reaffirmed a principle that has always guided British Safety Council’s work: that progress and protection must go hand in hand.
Matthew Winn is Public Affairs Manager at British Safety Council.