
Featured:

Dr Shaun Davis, Belron

Karl Simons OBE, FYLD

Shaun: Hello and welcome to this edition of the British Safety Council podcast, Health and Safety Uncut. I'm delighted to be joined today by Karl Simons, Chief Futurist and Co-Founder of FYLD. The subject we'll be looking at today is the future of technology in the workplace.

Karl is Co-Founder and Chief Futurist at the AI technology organization FYLD and an advisor to the UK Government on matters of occupational safety and health. In addition, he lectures at the University of Cambridge and on digital strategy at Harvard Business School, is a non-executive director on the main board of Water and Sanitation for the Urban Poor, and chairs the editorial board of SHP.

A mechanical and electrical engineer by trade, his 35 years' experience includes domestic and international work in the sectors of defence, oil and gas, highways, construction, rail, energy and utilities, as well as experience of complex technology transformation for major corporates and the delivery of major physical infrastructure programmes.

On a personal note, I've known Karl for a number of years and he's one of those rare individuals who manages to balance the academic with the practical, so when I asked Karl to join us and he agreed, I was obviously delighted. Adding to all the accolades, in 2020 he was awarded an OBE for services to mental health policy.

Well deserved in my view, and so I'm delighted that Karl's here.  

Karl: Hey, great to be here. To be honest, that's the nicest thing you've said to me in the last couple of decades.

Shaun: We're on a podcast, you see. My first question is: what is a Chief Futurist?

Karl: Oh, if you find out, let me know. It's been quite interesting. It's four and a half years now since I stepped down from the major corporate world, having spent so many decades, 37, 38 years, working. But the last four years have been pretty incredible, because we've taken a startup organization through the stages of investment and through building an artificial intelligence platform that was born in COVID.

And actually seeing that evolve has been pretty incredible. I'll talk some more about that as we go through. But in terms of the Chief Futurist side: when my boss and I spoke about taking on the topic of artificial intelligence,

remember, we were doing this before ChatGPT, which everyone got excited about when it was released around two years ago. So that was four or five years ago, when I started to step into this role. I began as the Chief Customer Officer, looking after brand, marketing, sales, customer success and deployments.

And then, as the business evolved and we were able to put people in post to run those areas, I stepped back and became much more of a brand evangelist, doing a lot more speaking about what is happening with artificial intelligence, and advising politicians, Lords and governments, both domestically and internationally, now in Canada too. It's been pretty incredible.

So I guess it reflects the ability to show the practical application of risk management within business using artificial intelligence. There are a lot of academics saying what you can do and what you should do, and very few practitioners saying: here's how it works, here's what we've done, and here's the outcomes you should see.

Shaun: And what's your personal story? What's the background? Obviously I've known you for years and years, but I don't actually know your personal story. Did safety find you? Did you go looking for it? What happened; where did Karl and safety meet?

Karl: I think the journey into the occupational safety and health world began following my time in the military. I spent 13 years in the British Army, as a commander in times of conflict and as a mechanical and electrical engineer, which is where the military trained me.

Then, as I was leaving and moving into the civilian world of work, it was a natural fit, because the military is all about the preservation of life and I wanted to continue that journey. So I got trained up and began as a health and safety advisor back at the turn of the century, which was a really interesting time in occupational safety and health.

And then it all went from there, really.

Shaun: And when you look back at that evolution, from when you entered the safety world to where we are now with the advent of AI, what would you say are the big, momentous changes you've seen?

Karl: Throughout my tenure, the last 25 years, we've really shifted the goalposts, not just around injury prevention, preservation of life and prevention of harm, but also, as you mentioned earlier, the work around mental health within occupational safety and health.

That goes back 15, even 20 years. When we started looking at this, I remember speaking at the time to executives and leaders and saying: you will never be able to get the levels of harm prevention you want until you deal with the culture within your organization and the ability to have people open up and speak in a psychologically safe environment.

The route to that is to deal with how people are presenting themselves and how they are feeling within the business. The problem at the time was that HR professionals were the ones dealing with mental health within business.

But it was always about somebody who had something wrong with them. The reality is we all have mental health, and our mental fitness is really important, as is our physical fitness. And it affects diversity and culture; it crosses into how people behave when the organization isn't looking over their shoulder and they have to work on their own initiative.

Will they speak, act and behave appropriately to one another at work? Get that right and you'll start to have a very different culture within your business. And what I found is that you have to tackle mental health alongside physical health and physical injury within business; the two can't be separated.

And if you get it right, you can have levels of harm prevention that are phenomenal because you have psychologically safe workplaces.  

Shaun: I remember you and I talking a long time ago, long before the mainstream advent of AI conversations, and you were using AI then, and virtual reality training.

So in my view you were a super early adopter. Where are we now? What's happening in that area, and how are people catching up, or not?

Karl: Yeah, it's phenomenal. So if we step back, the first thing is: what is artificial intelligence?

I've done hundreds of presentations in the last four and a half years that I've been working directly for a technology company. Prior to that, you're right, I was doing a lot of virtual reality training and using tools to look at risk.

But on artificial intelligence itself: if you think about natural language processing, there are a lot of companies now chucking "AI" onto the end of their organization just so they can say they've got AI working within the business. Really, though, these are language models or chatbots; they've been around for a while now and they're getting more and more advanced, right?

I'm sure you use ChatGPT regularly like everybody else, right? The language models are doing great. But to make it practical: how many spoken languages do you think there are in the world, Shaun? Hazard a guess.

You didn't think I'd be asking you the questions, did you?

Shaun: I didn't. I'm trying to work it out based on the number of countries and the so-called...

Karl: ... Just give me a number.  

Shaun: A couple of hundred.

Karl: Right. Most people in every audience I speak to say between 250 and 500.

The reality is it's a lot higher: there are over 7,000 known spoken languages in the world right now. And we all have Alexa in our homes; it's by far the most widely used voice-to-voice program around for a language model. How many languages do you think Alexa is in?

I'll tell you: nine. That tells you about the infancy of where natural language processing is now. ChatGPT, which everybody got excited about two years ago when it was released, is already in over 80 languages. That's the speed of transition. So within my organization, FYLD, we have vocal transcription mapping.

What that means is we analyse a video from, let's say, a field worker. We have tens of thousands of field workers using the platform daily at the point of work. Because what do field workers hate? When I was on the tools I hated technology, because you're asking somebody who digs holes for a living to type in text and use lots of drop-down boxes. So they resist digital technology. Now imagine instead they record a video, press upload, it goes into the cloud, and the natural language processing transcribes what is said.

It then identifies keywords, maps them to hazards and controls, and auto-populates the report for them. It doesn't replace the human, it aids them: the report comes back down onto the device and the worker verifies it. So we can use natural language processing to map to hazards and controls in a way that allows auto-population.
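
To make that concrete, here is a purely illustrative sketch of that kind of keyword-to-hazard mapping: a transcript comes in, keywords are matched to hazards and controls, and a draft report goes back for the worker to verify. The lexicon, function names and report shape are assumptions invented for this example, not FYLD's actual implementation.

```python
# Illustrative sketch only: a toy keyword-to-hazard mapper in the spirit of
# the pipeline described above. The hazard lexicon and report shape are
# invented for this example and are not FYLD's data model.

# Hypothetical lexicon mapping spoken keywords to hazards and suggested controls.
HAZARD_LEXICON = {
    "excavation": ("Open excavation", ["Edge protection", "Safe access and egress"]),
    "scaffold": ("Work at height", ["Scaffold inspection tag", "Harness where required"]),
    "cable": ("Buried services", ["Cable avoidance tool scan", "Permit to dig"]),
    "traffic": ("Moving vehicles", ["Traffic management plan", "High-visibility clothing"]),
}

def draft_risk_report(transcript: str) -> dict:
    """Turn a video transcript into a draft report for the worker to verify."""
    text = transcript.lower()
    hazards = []
    for keyword, (hazard, controls) in HAZARD_LEXICON.items():
        if keyword in text:
            hazards.append({"hazard": hazard, "controls": controls, "trigger": keyword})
    # The draft goes back to the device; the human confirms or corrects it.
    return {"status": "awaiting_verification", "identified_hazards": hazards}

if __name__ == "__main__":
    spoken = ("Today we're breaking out the footpath, there's an excavation about "
              "a metre deep and a low-voltage cable marked up nearby.")
    print(draft_risk_report(spoken))
```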

Shaun: When I was preparing for our chat, one of the things I was thinking about was the advent, growth and development of this area. What are you seeing in terms of governance? Are more organizations putting AI policies in place? Do they have AI strategies in the biggest sense of the word?

Or is it a hybrid? What are you seeing in that world?

Karl: OK, so policies and strategies are evolving; you're starting to see them come into organizations. But if I just step back a second: natural language processing is the first step.

Then you've got computer vision. Think of it as what is said, what is seen and what is known. What is said: natural language processing. What is seen: computer vision, where we identify images. And what is known: predictive reasoning engines, the machine's ability to learn.

By that I mean the machine can learn from the video somebody takes. We've been to 1.5 million jobs now, infrastructure jobs on the street, in construction and so on, so imagine we've now got 3.7 million images within the database.

So the machine has learnt and can identify, say, scaffolding, cones, excavations, and therefore we're able to auto-populate that way. What is known is really interesting. Imagine a field worker goes to the point of work and records a video, and the machine says: hey, Karl didn't say scaffolding, he didn't say work at height.

I didn't see any scaffolding, but he normally does that type of work, so I'll populate that as well and push it across to him. Or: at that GPS location, I was there last month with another team and there was a low-voltage electricity cable, so I'll push that across to him to consider as well. So artificial intelligence is really helping these teams by providing risk intelligence at the point of work.
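
As a rough illustration of that "said, seen, known" idea, the sketch below merges hazards from the transcript, from detected objects and from history at the same location into one set of suggestions. The data sources and field names are assumptions made up for the example, not a description of FYLD's engine.

```python
# Illustrative sketch only: combining "what is said", "what is seen" and
# "what is known" into one de-duplicated set of suggested hazards.

from dataclasses import dataclass

@dataclass
class RiskSuggestion:
    hazard: str
    source: str  # "said", "seen" or "known"

def suggest_hazards(transcript_hazards, detected_objects, site_history):
    """Merge the three signal sources, keeping the first source that raised each hazard."""
    suggestions = {}
    for hazard in transcript_hazards:                 # what is said (NLP on the video)
        suggestions.setdefault(hazard, RiskSuggestion(hazard, "said"))
    for obj, hazard in detected_objects.items():      # what is seen (computer vision)
        suggestions.setdefault(hazard, RiskSuggestion(hazard, "seen"))
    for hazard in site_history:                       # what is known (history at this GPS location)
        suggestions.setdefault(hazard, RiskSuggestion(hazard, "known"))
    return list(suggestions.values())

if __name__ == "__main__":
    print(suggest_hazards(
        transcript_hazards=["Open excavation"],
        detected_objects={"scaffold_tower": "Work at height"},
        site_history=["Buried low-voltage cable"],
    ))
```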

On the policy side, we've got things like the AI Action Plan that has come out from government, and legislation will follow. Then you've got the AI Pact coming out of Europe. So there's a lot of information coming, and it's all based around ethics and governance.

You're right on that. I think what happens, as we know, is that legislation and regulation follow industry, and industry follows what organizations are doing. And it all comes from the fact that the speed at which organizations are taking up artificial intelligence is phenomenal.

We've already deployed FYLD in 270 different organizations, with the platform used across five continents in electricity, gas, water, highways, rail, oil and gas, and facilities. So what you're seeing is a really rapid take-up of new technology. But there are still risk-averse organizations, because that fear does exist; on occasion you've got people saying, well, what's it going to mean? And if you think about why that is...

Take generative AI. All that means, for listeners, is the creation of content. The challenge is when the algorithms face out into the external environment. Imagine a field worker at the point of work asking ChatGPT, or any other platform: give me the confined-space controls for this area.

You have no idea what's going to be created. So you need the algorithms facing into your organization, which is what we do: the content is drawn from your standards, your controls, your procedures, so that whatever is created is something you have control over. That's really important. AI needs to be controlled when you introduce it into your organization, through the systems you adopt.
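
The general pattern behind that, grounding generated content in an organization's own documents rather than the open web, can be sketched roughly as below. The documents, the naive keyword scoring and the prompt wording are assumptions for illustration; real systems typically use embedding-based retrieval, and this is not a description of FYLD's product.

```python
# Illustrative sketch only: answer safety questions from internal standards,
# not the open web. Documents, scoring and prompt wording are invented.

INTERNAL_PROCEDURES = [
    {"id": "CS-001", "title": "Confined space entry",
     "text": "Gas test before entry, top person in attendance, escape set available."},
    {"id": "EX-014", "title": "Excavation support",
     "text": "Shore or batter excavations over 1.2 m and inspect at the start of each shift."},
]

def retrieve(query: str, k: int = 1):
    """Rank internal documents by naive keyword overlap with the query."""
    words = set(query.lower().split())
    return sorted(
        INTERNAL_PROCEDURES,
        key=lambda d: len(words & set((d["title"] + " " + d["text"]).lower().split())),
        reverse=True,
    )[:k]

def build_grounded_prompt(query: str) -> str:
    """Build a prompt instructing the model to answer only from retrieved internal sources."""
    context = "\n".join(f"[{d['id']}] {d['title']}: {d['text']}" for d in retrieve(query))
    return (
        "Answer using ONLY the internal procedures below and cite the document IDs.\n"
        f"{context}\n\nQuestion: {query}"
    )

if __name__ == "__main__":
    print(build_grounded_prompt("What are the confined space controls for this chamber?"))
```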

Shaun: So, in the interest of balance, I'm going to ask you a question from the other side of it. You talked about risk intelligence; is there an unintended consequence of deskilling people, or of people over-relying on it? People might think: I don't need to go and do the look-see, stand back and do a physical risk assessment, as long as the system can draw something through. That's fine until the machine doesn't know what it doesn't know, or isn't seeing something.

So is there a risk in that element?

Karl: No, actually it's the opposite. What we find is there are two things. Let's say there are a hundred jobs programmed today; the first thing you need to know is how many of those jobs had a risk assessment undertaken, for example.

You're able to know that because they're using the platform. But it's not just that the machine creates the content for them; we don't replace the human, we aid them. If an individual just presses accept, because they're not interrogating the data that's produced, then the machine doesn't learn.

But we will also know about that individual. So we have what we call the AI coaching layer: behavioural nudges that go through saying, hey, you didn't interact with this. Interaction is as important as usage.

So the machine says: are you sure all these hazards and controls are correct? And then the individual will interact with it and say: no, that was wrong, that was right. So they're training the model.
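
A crude sketch of how that kind of coaching nudge could work is below: flag reports that were accepted quickly with no edits, and turn the worker's explicit confirmations or corrections into training feedback. The thresholds and field names are invented for this example and are not FYLD's logic.

```python
# Illustrative sketch only: nudge "blind accepts" and collect corrections
# as labelled feedback. Thresholds and fields are invented for the example.

from dataclasses import dataclass, field

@dataclass
class ReportReview:
    suggested_hazards: list
    seconds_to_accept: float
    edits_made: int
    confirmations: dict = field(default_factory=dict)  # hazard -> True/False from the worker

def needs_nudge(review: ReportReview, min_seconds: float = 20.0) -> bool:
    """Nudge when the worker accepted quickly and never touched the suggestions."""
    return review.edits_made == 0 and review.seconds_to_accept < min_seconds

def training_labels(review: ReportReview):
    """Turn explicit confirmations and corrections into labelled examples for the model."""
    return [(hazard, correct) for hazard, correct in review.confirmations.items()]

if __name__ == "__main__":
    review = ReportReview(["Work at height", "Buried services"],
                          seconds_to_accept=6.0, edits_made=0)
    if needs_nudge(review):
        print("Are you sure all these hazards and controls are correct?")
    review.confirmations = {"Work at height": True, "Buried services": False}
    print(training_labels(review))
```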

Shaun: All right, good. So what's the sweet spot between risk and opportunity, then, for employers who are looking to introduce more technology and AI into their business?

Karl: The first thing is: reach out to a smart technology consultant or provider and have the conversation. I'm often asked by organizations, can you come and speak to us? And then we have a general conversation in a safe environment where I'll present artificial intelligence to them.

I do that a couple of times a week at the minute, presenting to different organizations. And that's not just big companies: it's small organizations, academics, politicians from both sides of the House, and government representatives.

It's incredible. Once people reach out, they go: right, OK, explain how it all works. And then we can show them the practical application of it within infrastructure, within different sectors, etcetera.

Shaun: So, coming back to your role as Chief Futurist: there's a day job, delivery, and then there's an advocate, ambassador, educational piece as well. I think that's what you're saying, right?

Karl: Yeah. And to be fair, I've tried to do that throughout my career. Fifteen years ago, lecturing on mental health within business was no different. We did a lot of work, and in those early days you were involved in some of that: advocacy around what's important within organizations when you're looking at mental health within business.

It's no different now. Technology is advancing at a rapid pace. AI isn't a fad; it's not going anywhere any time soon, and they're not suddenly going to stop it, the genie is out of the bottle. So the best thing companies can do is embrace it by getting an understanding of it.

There is a digital literacy and upskilling programme that is required. I did it for the Canadian government, Premier Ford's team in Ontario, for the likes of the ministers of Infrastructure, Labour and Energy, and went through a whole programme of presenting and helping them.

You know, it's no different for organizations in the UK. Many are reaching out and actually getting people in to present and speak to them about this topic.  

Shaun: So, when we did your introduction, you talked about psychological safety, and you've mentioned mental health quite a few times.

And I know it's a subject really close to your heart. So, for our listeners: how do employers ensure that their employees feel safe as more AI and technology is being introduced or considered?

Karl: Yeah. So the first thing is you've got to have that digital literacy programme.

It's funny: many of the next generation coming into business right now are digital natives. They've grown up with technology, so they're not risk-averse to it; it's just embraced and adopted by them as part of their DNA.

The reality is that many of us, at our age, are the last of the analogue generation. But what I find is that when you do the digital literacy programmes and start to look at education, when we deploy and roll out education on artificial intelligence through the companies we work with, you'd be amazed at how quickly the older generation actually adopt the technology.

If you think about the reasons why that is: recently the HSE, the Health and Safety Executive in the UK, commissioned FYLD to do a barriers-to-AI-adoption research programme with them, and we worked with an organization in the highways sector.

We surveyed a group of individuals about the barriers: "we're going to give you artificial intelligence to help you do your job". And the resistance was there immediately. You could see high levels of resistance: "No, we don't want it." Then we rolled out the programme, educated them and got them using the technology.

The reality is 95% of them adopted it and said: don't take it away. And now that organization has spread us across their company. Why is that possible? If you think about it, it's auto-population, like I said earlier. For infrastructure workers, they're just recording a video.

It's the easiest thing in the world to ask: do you know what you're doing on your job? Yes. OK, we want you to talk about it. They've never done it before, though. We think, we write; we think, we type. We never actually get out at the point of work and talk about the job.

They've never done it before. So this is new, but it's self-educating. And we say: then you leave it, the artificial intelligence will populate the report for you. You just need to tell it what you're doing today, and it identifies what's said, what's seen and what's known. That means the experienced worker will be able to speak much better about the job than a new worker, even though the new worker will be better with the technology.

Shaun: So there are some enablers, then, in that simplicity element: as you said, not having to use drop-down boxes, and having that upload technology. We've talked about barriers; what barriers do you find? Do people think it's invasive, an intrusion on privacy? Is it suspicion? And how do you work with organizations to educate and inform them on that?

Karl: Naturally, humans are resistant around the Big Brother effect. Fixed-camera technology on sites, let's say, is a challenge, because people don't want to be watched all the time in everything they're doing.

But if you give them what we use, mobile phones and tablets, they're on the right side of the camera, so it's very easy. And it's not just safety: it's productivity, quality, inspection and testing. The reality is that artificial intelligence is now enabling auto-population, which means digital technology will be embraced by individuals.

That means you're able to get remote management intervention. Imagine the individual presses accept on the video they've done and the report that's been produced: a push notification goes directly to the remote manager saying, hey, one of your six gangs has just started work and uploaded, and then you can watch it.

"Do I need to go and see them today? No, I'll go and see the one over here where, having watched it, there's an increased risk level." Technology is enabling us to reduce windshield time in vehicles, and to capture assets and feed that back into work scheduling and asset management programmes.

It is changing the way quality and productivity affect the field worker. Do you know that, on average, 30% of a field worker's day is standing time? The reality is the parked car's in the way, the permit's not in place, the concrete truck hasn't arrived, the client hasn't opened the gate.

Whatever it is, normally the field worker will phone up the remote manager and say, hey boss, the equipment's not right; the manager gets on to the supplier, the supplier comes out, thanks very much, I'll carry on. Now, that's never captured. Within our platform, we allow the field worker to say: hey, I couldn't do six connections today because the pump never turned up, but I told you, and I captured it.
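
In the same spirit, here is a rough sketch of that remote-management side: pick which crew to visit based on the risk in their uploaded reports, and log standing-time events so delays are recorded rather than lost in phone calls. The risk scores, fields and example data are invented for illustration, not FYLD's implementation.

```python
# Illustrative sketch only: risk-based visit prioritisation and standing-time
# capture. Scores, fields and example data are invented for this example.

from dataclasses import dataclass

@dataclass
class CrewReport:
    crew: str
    risk_score: float  # assumed output of a risk engine; higher means riskier
    video_url: str

def next_visit(reports):
    """Pick the crew with the highest assessed risk for today's site visit."""
    return max(reports, key=lambda r: r.risk_score)

def log_standing_time(crew: str, minutes: int, reason: str) -> dict:
    """Capture a delay event so lost productive time is visible to the remote manager."""
    return {"crew": crew, "standing_minutes": minutes, "reason": reason}

if __name__ == "__main__":
    reports = [
        CrewReport("Gang A", 0.35, "https://example.invalid/a"),
        CrewReport("Gang B", 0.82, "https://example.invalid/b"),
    ]
    print("Visit today:", next_visit(reports).crew)
    print(log_standing_time("Gang A", 45, "Concrete pump did not arrive"))
```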

Shaun: Yeah. I suppose the other thing there is a sustainability and environmental-impact benefit, in terms of reduced travel, plus the greater collaboration and oversight. Because if you're a remote manager and you want to check in on four or five squads, teams of people working, you can't be everywhere all the time.

But you've got much more accessibility if you can see where they've got to and what they've been doing, and check in, not check up necessarily, but check in, on where they are. So there must be an environmental sustainability benefit as well.

Karl: Yeah. At its core, technology is now enabling visibility at the point of work to allow improved decision-making remotely.

So there's a massive sustainability impact on how you harvest information, because the unstructured data captured at the point of work by the field worker enables remote management to then have structured conversations around: how are we delivering our services and our work for the customers at the minute?

But yes, reduced windshield time is an easy one, reduction in paper, all these things are happening. And the field workers are adopting it. That's the biggest change: the technology is being adopted.

Shaun: Brilliant. So, the British Safety Council undertook a survey last year with YouGov looking at technology and the future of work.

I'll read a couple of extracts. 63% of 2,000 employers and 41% of 2,000 employees reported optimism about the impact of AI on their workforce and workplace. And another one: 6% of employers foresaw that over 50% of their workforce could be replaced by 2034, and 19% of employees foresaw no role replacement at all by 2034.

So, either end of the spectrum there. Does that chime with what you're seeing, is it at odds with it, or is it something else?

Karl: Optimism-wise, that's quite interesting. In terms of those numbers, the 63% of employers and 41% of employees you mentioned, I would have thought they'd be higher than that.

But I do understand that when it comes to employees it's maybe low because of when it was done: 2024, you said, right? That was the early days. Look how far we've come in the last year, with people using artificial intelligence just through chatbots and language models.

So if you ran this survey again, the chances are those numbers would be a lot higher, as people have started to understand artificial intelligence.

Shaun: So it's moved. We're just a year, well, 13 or 14 months, on, and it's moving as quickly as that, right?

Karl: Massively, massively. A great living example: do you use ChatGPT? Yes, right. Did you use it 18 months ago?

Shaun: I'd never heard of it 18 months ago, yeah.

Karl: So if you think about it, in the last 12 months you've started using artificial intelligence to write stuff for you, and it adds value for you. What I'm seeing is, well, we coined the term "Google it", right?

Before that we used to say "Ask Jeeves". So what's the next thing? Now we're starting to "ask chat"; we're asking ChatGPT everything as opposed to going to Google. That whole transition is happening at a very fast rate now, and it's quickening; it's moving quicker.

The other one you mentioned there, the 26% of employees who believe AI would make work less safe: I think the reality is that artificial intelligence is going to increase the levels of harm prevention within the workplace.

We're seeing that already, because it is aiding and supporting, certainly in the world I live in, field workers in dangerous environments, out on the roads, on the public highway, in street works. The reality is we're seeing much better levels of harm prevention in the first year of organizations using our technology.

We're averaging up to a 50% reduction in injuries and incidents across the sector companies and highways organisations, which is phenomenal. Even in mining: recently, in the first six months of using the technology, there was a 50% reduction in harm levels.

Shaun: So you're obviously really close to this with the work that you do. If we fast-forward another 12 months, bearing in mind how quickly it's moving, what else is coming down the track for us?

Karl: So I think, when we look at the likes of robotics, that's an emerging field that is very quickly going to take hold.

And this is where the worry comes in around the ethics of artificial intelligence use. At the minute, we're using systems that allow us to create content and advise humans who are doing work.

The reality is, we're starting to shift towards robots that are faster, stronger and don't need breaks. They're already in Amazon's workplaces, albeit in their infancy. And if you think about it: my father worked in Ford's for 25, 30 years of his life.

I remember the times when he came home, back when I was a child, saying: they're replacing us at work with robots. And look at what happened in the manufacturing centres for car production; you see the robot arms doing the work now, right? He did lose his job as a result of it, and then he had to go and change the way he worked.

The reality is this is going to increase. The bit I worry about is the weaponization of robotics; that's where it starts to get very scary. I saw a robot dog recently, you may have seen them.

I saw one fitted with cameras and guns. You can imagine a fleet of those in modern warfare. That's where it starts to become very scary, I think.

Shaun: Interesting. OK, so each episode I like to ask our guests what one takeaway they'd like listeners to have following our conversation. When they turn up to work tomorrow, or next week, what one thing would you like them to think or do differently as a consequence of our conversation today?

Karl: I would say: if you haven't tried artificial intelligence, then go ahead; it's less scary than you think. Have a try. The second thing is, if you're in an organization and you're thinking, right, how can we use artificial intelligence to help us deal with efficiency within our business and get better, then by all means reach out and speak to somebody. You'll actually find it quite surprising.

But get somebody who can explain it to you in simple terms, in terms of how it could benefit your organization.

Shaun: Amazing. And if people want to carry on this conversation with you and FYLD, how can they get in touch with you? What's the best way to follow this up?

Karl: Yeah, socials are always best. By all means reach out on LinkedIn; I'm available, and I do read everything that comes through.

Shaun: Fantastic. Well, Karl, thank you so much for your time. I really appreciate it, and I appreciate the insight. You covered a lot that's been super interesting and informative, and I wish you and the team at FYLD every success in the future.

So thank you again. Brilliant.  

Karl: Thanks, Shaun. Appreciate you having me on. 

 

Links will be in the episode description.