High-stakes AI: Why unreliable AI decisions are too risky for high-hazard industries

By Pulkit Parikh, VelocityEHS, speaking to Safety Management, 16 March 2026

As global regulations shift from voluntary ethics to enforceable safety laws, the era of the ‘black box’ is over. Pulkit Parikh, PhD, machine learning scientist at VelocityEHS, discusses with us why transparency, auditability, and human-in-the-loop design are the new foundations of worker protection.
SM: Under the current global regulatory shift, many AI applications in high-hazard sectors are being flagged as ‘high-risk’. What should safety leaders be doing now to ensure their digital infrastructure meets these new transparency requirements?
Human-AI interaction in software developed for high-hazard industries has grown to the point where the stakes of opaque, unexplained AI decisions are too high to accept. Workers’ lives may depend on understanding the rationale behind AI recommendations and making informed decisions.
Hence, safety leaders must prioritise transparency and explainability alongside inference quality. That requires in-depth documentation of how your AI systems are built, who governs them, and how they are validated.
Safety leaders should also hold every AI vendor they work with to the same standard: model cards, data provenance documentation, validation records, and clear human accountability at each decision point. Organisations should build auditability into their infrastructure as a core engineering requirement, not an afterthought.
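By way of illustration, that vendor documentation can be made machine-readable so it is auditable by default. Below is a minimal sketch in Python; the field names are illustrative assumptions, not a VelocityEHS or regulatory schema.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class ModelCard:
        """Minimal machine-readable model card supporting a vendor audit."""
        model_name: str
        version: str
        intended_use: str
        training_data_sources: list[str]   # data provenance
        validation_records: list[str]      # links to validation reports
        known_failure_modes: list[str]
        human_accountability: str          # who signs off at each decision point
        last_reviewed: date

    card = ModelCard(
        model_name="incident-severity-classifier",
        version="2.3.1",
        intended_use="Flag potential serious injury or fatality incidents for human review",
        training_data_sources=["de-identified incident reports, 2015-2024"],
        validation_records=["validation/sif_holdout_2025.pdf"],
        known_failure_modes=["sparse free-text reports", "non-English narratives"],
        human_accountability="EHS lead reviews every flagged incident before action",
        last_reviewed=date(2026, 3, 1),
    )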
Pulkit Parikh: "We must build a safety culture that actively rewards professionals for engaging critically with AI outputs rather than rubber-stamping them."
SM: We are moving from ‘AI ethics guidelines’ to ‘AI safety laws’. How do you see this impacting the way your clients in oil and gas, transportation or mining approach their digital transformation roadmaps?
The shift from voluntary guidelines to enforceable laws will significantly affect AI deployment decisions in high-hazard sectors. Once ethical questions around AI use take on legal significance, you will see substantial changes in procurement, architecture decisions, and vendor relationships. Oil and gas companies may start to require that any AI tool interacting with safety-critical workflows meets explainability and auditability standards as a hard procurement criterion as opposed to a nice-to-have.
SM: VelocityAI is your new AI assistant. Can you tell us a bit about how it works and how it can help organisations where the safety of workers is critical?
Ask Vēlo, our AI assistant, is powered by VelocityAI, the intelligence engine built by our multidisciplinary team of PhD ML scientists, certified ergonomists, and experienced safety professionals.
Ask Vēlo is validated on real-world EHS incidents, and designed to operate within the structured context of safety workflows rather than as a generic question-answering layer.
In practical terms, that means a safety professional completing an incident report can use Ask Vēlo to determine whether the incident involves a potential serious injury or fatality, identify the underlying hazard type, surface likely root causes, and generate corrective action recommendations. All of this can be done without leaving the VelocityEHS Accelerate® platform or waiting for a senior expert to become available, making the reporting and investigation of incidents more accurate and more efficient.
This is especially valuable in high-hazard industries where the quality of an incident investigation directly determines whether the same event happens again six months later and causes a fatality or disability.
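To make that workflow concrete, here is the shape such a triage step might take in Python. The function and field names are hypothetical; this is not the Ask Vēlo API, only an illustration of the kind of structured, advisory output described above.

    from dataclasses import dataclass

    @dataclass
    class IncidentTriage:
        is_potential_sif: bool        # serious injury or fatality?
        hazard_type: str
        likely_root_causes: list[str]
        corrective_actions: list[str]

    def triage_incident(report_text: str) -> IncidentTriage:
        """Hypothetical triage step: a real system would call the AI
        assistant here; this stub returns a fixed example."""
        return IncidentTriage(
            is_potential_sif=True,
            hazard_type="stored energy / line of fire",
            likely_root_causes=["lockout-tagout not verified before maintenance"],
            corrective_actions=["re-train crew on LOTO verification",
                                "add secondary isolation check to the permit"],
        )

    # The safety professional reviews the suggestion and keeps the final say.
    suggestion = triage_incident("Worker struck by pressurised hose during maintenance.")
    print(suggestion.hazard_type)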
SM: ‘Garbage in, garbage out’ – i.e. when AI systems produce inaccurate or biased results because they were trained on flawed data – is a fatal risk in safety. What are the environmental, social and governance (ESG) implications when the data used to train safety models is incomplete or lacks the diversity of the actual workforce?
This is a crucial issue relating to fairness in AI, and in my view it has yet to receive adequate attention. The quality of the training data, including its comprehensiveness and inclusiveness, greatly affects the behaviour of the model.
When you train a safety model on incident data that underrepresents certain worker populations, without taking steps to ensure fairness, the model is likely to make predictions (such as hazard assessments and risk scores) that are less accurate for those workers.
Hence, a non-robust system trained on biased data, such as one calibrated to the average worker in the training set, not only discriminates against underrepresented workers but also puts lives at risk.
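One concrete way to surface this failure mode is to report model accuracy per worker subgroup rather than as a single aggregate. A minimal sketch, assuming hypothetical evaluation data with the model’s predictions, the actual outcomes, and a subgroup label:

    import pandas as pd

    # Hypothetical evaluation data: one row per incident.
    df = pd.DataFrame({
        "subgroup":  ["day"] * 6 + ["night"] * 2,
        "predicted": [1, 0, 0, 1, 0, 1, 0, 0],
        "actual":    [1, 0, 0, 1, 0, 1, 1, 1],
    })

    # The aggregate score hides the problem...
    print("overall accuracy:", (df.predicted == df.actual).mean())

    # ...while per-subgroup accuracy exposes where the model fails.
    print((df.predicted == df.actual).groupby(df.subgroup).mean())

On this toy data, the overall accuracy of 75 per cent conceals that the model is wrong on every night-shift incident, which is exactly the pattern a fairness audit is meant to catch.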
SM: How can AI-enabled insights help organisations identify and mitigate psychosocial risks, such as fatigue or cognitive overload?
A multi-modal AI system can use a variety of cues to detect psychosocial risks such as fatigue and cognitive overload. Some examples are patterns in incident timing relative to shift length, anomalous error rates in task completion, and ergonomic stress indicators that accumulate across a population over time.
The AI system may also identify helpful cues that escape human attention. It can be fed diverse inputs, ranging from incident records containing numerical and textual data to videos of workers performing routine tasks.
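As a simple illustration of the first of those cues, the snippet below checks whether incidents cluster late in the shift. The records and the 12-hour shift are assumptions for the example:

    import statistics

    # Hypothetical records: hours into a 12-hour shift when each incident occurred.
    hours_into_shift = [1.5, 7.0, 9.5, 10.0, 10.5, 11.0, 2.0, 9.0, 10.5, 11.5]

    # Crude fatigue signal: share of incidents in the final quarter of the shift.
    late = [h for h in hours_into_shift if h >= 9.0]
    print(f"late-shift share: {len(late) / len(hours_into_shift):.0%}")
    print(f"median incident time: {statistics.median(hours_into_shift)} h into shift")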
Workers’ lives depend on understanding the rationales behind AI recommendations and making informed decisions. Photograph: iStock
SM: Regulators are increasingly mandating a ‘human-in-the-loop’. How can we ensure that safety professionals remain the ultimate decision-makers rather than deferring blindly to an algorithm?
Human-in-the-loop should be a key design consideration. Done well, it not only ticks a regulatory compliance box but also delivers optimal value to customers, especially at this stage in the evolution of AI technology.
At Velocity, we ensure all artifacts produced by Ask Vēlo are merely nominations or recommendations. The user is always empowered to override the AI suggestions and have the final say.
AI’s strictly advisory role to human decision-makers needs to be reinforced at the operational level as well, so that we build a safety culture that actively rewards professionals for engaging critically with AI outputs rather than rubber-stamping them.
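In software terms, that advisory role can be enforced by making an explicit, attributable human decision a required step before any AI suggestion becomes an action. A minimal sketch of the pattern, illustrative rather than VelocityEHS code:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Recommendation:
        text: str
        source: str = "ai_assistant"

    @dataclass
    class Decision:
        recommendation: Recommendation
        accepted: bool
        reviewer: str                        # human accountability is explicit
        override_text: Optional[str] = None  # set when the human changes the suggestion

    def finalise(rec: Recommendation, reviewer: str, accepted: bool,
                 override_text: Optional[str] = None) -> Decision:
        """No recommendation becomes a corrective action without an
        attributable human decision; the reviewer can always override."""
        return Decision(rec, accepted, reviewer, override_text)

    decision = finalise(Recommendation("Re-train crew on LOTO verification"),
                        reviewer="j.smith", accepted=False,
                        override_text="Redesign isolation points; training alone is insufficient")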
SM: Investors are looking for safety performance data. How far does the governance of AI systems provide the auditable trail needed to prove an organisation is meeting its ESG commitments?
AI governance is becoming a foundational component of ESG transparency. The key is implementing governance frameworks that automate the logging of risk assessments, bias mitigation activities, and safety protocols as a matter of course.
That’s what generates a verifiable, auditable history of how an AI system is actually making decisions. It creates a transparent, interrogable asset out of what institutional investors currently view as a black box. Companies are already sitting on rich operational data from their safety platforms without recognising it as ESG evidence; the governance layer transforms that raw data into proof that safety commitments are grounded in measurable action rather than corporate intent.
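In practice, such a trail can be as simple as an append-only log in which each governance event is chained to the hash of the previous entry, so records cannot be silently altered. A sketch of the idea, not a prescribed implementation:

    import hashlib
    import json
    import time

    audit_log = []

    def log_event(event: dict) -> None:
        """Append a governance event, chained to the previous entry's hash."""
        prev_hash = audit_log[-1]["hash"] if audit_log else "genesis"
        record = {"ts": time.time(), "event": event, "prev": prev_hash}
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        audit_log.append(record)

    log_event({"type": "risk_assessment", "model": "incident-severity-classifier",
               "outcome": "flagged_sif", "reviewer": "j.smith"})
    log_event({"type": "bias_check", "metric": "per-subgroup accuracy", "passed": True})

Any retroactive edit to an earlier record invalidates every hash after it, which is what lets an auditor trust the trail.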
SM: As safety tech becomes more regulated and complex, how must the role of the EHS (Environment, Health, and Safety) professional evolve? Do they need to become data governors as much as safety experts?
The short answer is yes, but a clarification about the meaning of ‘data governor’ in this context is warranted. It doesn’t mean EHS professionals need to retrain as data engineers. It means they need enough technical literacy to ask the right questions of the systems they rely on: where did this model’s training data come from? How was it validated? What are its known failure modes? These are governance questions, not engineering questions, and they sit within the professional responsibility of those whose decisions affect workplace safety.
To book a live demo of Ask Vēlo on the VelocityAI platform, visit: www.ehs.com
Pulkit Parikh, PhD, is a machine learning scientist at VelocityEHS