The Digital Transformation Playbook
Kieran Gilmurray is a globally recognised authority on Artificial Intelligence, cloud, intelligent automation, data analytics, agentic AI, and digital transformation.
He has authored three influential books and hundreds of articles that have shaped industry perspectives on digital transformation, data analytics, intelligent automation, agentic AI and artificial intelligence.
𝗪𝗵𝗮𝘁 does Kieran do❓
When I'm not chairing international conferences or serving as a fractional CTO or Chief AI Officer, I'm delivering AI, leadership, and strategy masterclasses to governments and industry leaders.
My team and I help global businesses drive AI, agentic AI, digital transformation and innovation programs that deliver tangible business results.
🏆 𝐀𝐰𝐚𝐫𝐝𝐬:
🔹Top 25 Thought Leader Generative AI 2025
🔹𝗧𝗼𝗽 𝟱𝟬 𝗧𝗵𝗼𝘂𝗴𝗵𝘁 𝗟𝗲𝗮𝗱𝗶𝗻𝗴 𝗖𝗼𝗺𝗽𝗮𝗻𝗶𝗲𝘀 𝗼𝗻 𝗚𝗲𝗻𝗲𝗿𝗮𝘁𝗶𝘃𝗲 𝗔𝗜 𝟮𝟬𝟮𝟱
🔹Top 50 Global Thought Leaders and Influencers on Agentic AI 2025
🔹Top 100 Thought Leader Agentic AI 2025
🔹Top 100 Thought Leader Legal AI 2025
🔹Team of the Year at the UK IT Industry Awards
🔹Top 50 Global Thought Leaders and Influencers on Generative AI 2024
🔹Top 50 Global Thought Leaders and Influencers on Manufacturing 2024
🔹Best LinkedIn Influencers Artificial Intelligence and Marketing 2024
🔹Seven-time LinkedIn Top Voice
🔹Top 14 people to follow in data in 2023
🔹World's Top 200 Business and Technology Innovators
🔹Top 50 Intelligent Automation Influencers
🔹Top 50 Brand Ambassadors
🔹Global Intelligent Automation Award Winner
🔹Top 20 Data Pros you NEED to follow
𝗖𝗼𝗻𝘁𝗮𝗰𝘁 my team and me to get business results, not excuses.
☎️ https://calendly.com/kierangilmurray/30min
✉️ kieran@gilmurray.co.uk
🌍 www.KieranGilmurray.com
📘 Kieran Gilmurray | LinkedIn
From hype to habits: making AI safe, useful, and fair at work
Everyone wants the upside of AI; few talk plainly about the downside. We open the box and lay out the real risks HR and L&D teams face—data security, access control, vendor exposure, algorithmic bias, and the slippery problem of hallucinations—then build the guardrails that let you ship with confidence. Governance isn’t a brake pedal; it’s the seatbelt that keeps your organisation moving fast without flying through the windscreen.
TL;DR:
- AI governance as enablement, not bureaucracy
- Vendor risk, GDPR and EU AI Act awareness
- Identifying and reducing algorithmic bias in hiring
- HR and L&D as ethical gatekeepers and trainers
- Risk registers, documentation, and stakeholder alignment
- Incident readiness: response plans and communication
- Skills building through structured training and repeatable patterns
We start with the essentials: mapping data flows, enforcing least privilege, pressure-testing vendors, and keeping GDPR and the EU AI Act firmly in view. From there, we tackle bias with concrete steps—fairness metrics, pre-deployment testing, debiasing techniques, and human-in-the-loop controls—anchored by a candid look at high-profile failures and what they teach us. Hallucinations get the scrutiny they deserve as we turn critical thinking into a repeatable practice: tighter prompts, grounded answers, and validation workflows that prevent confident nonsense from slipping into policy or hiring decisions.
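To make "validation workflows" concrete, here is a minimal sketch of the kind of grounding check discussed in the episode: it flags answer sentences whose vocabulary barely overlaps the source material they were supposed to be grounded in. The function names, tokenisation, and threshold are illustrative assumptions, not a production hallucination detector.

```python
# A minimal grounding-check sketch: flag answer sentences whose word
# overlap with the supplied sources is low, and route them to a human
# reviewer. Threshold and tokenisation are illustrative only.
import re

def tokens(text: str) -> set[str]:
    """Lowercase word tokens, ignoring very short stop-ish words."""
    return {t for t in re.findall(r"[a-z']+", text.lower()) if len(t) > 3}

def flag_ungrounded(answer: str, sources: list[str], threshold: float = 0.5) -> list[str]:
    """Return answer sentences with low token overlap against all sources."""
    source_vocab = set().union(*(tokens(s) for s in sources))
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
        sent_tokens = tokens(sentence)
        if not sent_tokens:
            continue
        overlap = len(sent_tokens & source_vocab) / len(sent_tokens)
        if overlap < threshold:
            flagged.append(sentence)  # candidate hallucination: human review
    return flagged

# Example: the second sentence invents a policy detail not in the source.
policy = ["Annual leave requests must be approved by a line manager."]
answer = ("Annual leave requests must be approved by a line manager. "
          "Unused leave is automatically paid out each December.")
print(flag_ungrounded(answer, policy))
```

A crude overlap check like this is no substitute for judgement, but it shows the shape of a validation workflow: every generated answer passes a gate before it touches a policy or hiring decision.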
Throughout, we position HR and L&D as ethical gatekeepers and capability builders, the people best placed to train models responsibly and teach the business to use them well. That means a living risk register, clear roles, practical training, and a tested incident plan—because resilience is won on quiet days, not crisis days. If you want AI that is safe, fair, and actually useful, this conversation gives you the blueprint and the language to lead.
If this resonated, follow the show, share it with a colleague who’s wrestling with AI adoption, and leave a review telling us the one risk you want help tackling next.
Exciting New AI for HR and L&D Professionals Course:
Ready to move beyond theory and develop practical AI skills for your HR or L&D role? We're excited to announce our upcoming two-day workshop specifically designed for HR and L&D professionals who want to confidently lead AI implementation in their organizations.
Join us in November at the beautiful MCS Group offices in Belfast for hands-on learning that will transform how you approach AI strategy.
Details on how to register for this limited-capacity event are here - https://kierangilmurray.com/hrevent/ - or book a chat: https://calendly.com/kierangilmurray/hrldai-leadership-and-development
𝗖𝗼𝗻𝘁𝗮𝗰𝘁 my team and me to get business results, not excuses.
☎️ https://calendly.com/kierangilmurray/results-not-excuses
✉️ kieran@gilmurray.co.uk
🌍 www.KieranGilmurray.com
📘 Kieran Gilmurray | LinkedIn
🦉 X / Twitter: https://twitter.com/KieranGilmurray
📽 YouTube: https://www.youtube.com/@KieranGilmurray
📕 Want to learn more about agentic AI? Then read my new book, Agentic AI and the Future of Work: https://tinyurl.com/MyBooksOnAmazonUK
Kieran Gilmurray: Claire, everyone talks excitedly about introducing AI to their business, but let's be transparent here. At the beginning, when you don't know exactly what you're doing, there are real risks to introducing AI in a business. What are you seeing?
Claire Nutt: I think there's so much excitement around AI and all these wonderful things it can bring, the time it saves and the efficiencies. And organizations need to be clear: this is exactly the same as introducing any new technology into a business, and there are risks. When you think about introducing new technology and you're looking at the data it's processing, you're immediately considering the privacy controls in that piece of software. How secure is it? Who has access to it? What are the different levels of access? And what involvement in your infrastructure is there? So when it comes to AI, it's critically important not to overlook that. It is still the introduction of a new piece of technology and process into an organization, and you need to ensure the right governance and risk assessment has been done around it. And when we come to AI, that risk isn't reduced, and the opportunity for choice is vast. You have the opportunity to create your own, you might use open source to develop individual processes or sections within your business, or you might decide to buy. And deciding to buy a shiny product from a well-known company doesn't mean it's without its risks either. We have heard some stories recently about big tech companies being hacked when it comes to AI, and the introduction, for example, of malicious code into open-source products too. Nothing is fail-safe. But the good news is that organizations that put the right guardrails in place, have good governance around what AI they're using, and have a clear understanding of where the risk is and how that risk is mitigated internally have a very high success rate with the introduction of that technology into the business. For HR and L&D, it will create a little bit of nervousness; they are risk averse as a profession. That's just going to happen. But as I say, where you do have the right guardrails, where you have given a considered approach to what you're using, where you are thinking about GDPR and privacy and who's processing the data, there is a high success rate with that introduction. What are your thoughts? What would be your biggest risk, do you think, Kieran?
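A quick illustration of Claire's access-levels point: a minimal least-privilege sketch in Python, where every role carries an explicit allow-list and anything unlisted is denied by default. The role and resource names here are hypothetical examples, not any real product's permission model.

```python
# A minimal least-privilege sketch: explicit allow-lists per role,
# deny by default. Role and action names are made up for illustration.
ROLE_PERMISSIONS = {
    "hr_admin":   {"read_employee_records", "write_employee_records"},
    "hr_analyst": {"read_employee_records"},
    "ld_trainer": {"read_training_records", "write_training_records"},
}

def is_allowed(role: str, action: str) -> bool:
    """Deny by default: unknown roles and unlisted actions both fail."""
    return action in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("hr_analyst", "read_employee_records")
assert not is_allowed("hr_analyst", "write_employee_records")  # no write access
assert not is_allowed("intern", "read_employee_records")       # unknown role
```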
Kieran Gilmurray: Ah, there's loads. Do you know what I mean? When you don't know what you're doing, everybody makes mistakes. It's the same as driving: you get in the car for the first time and you do all the things wrong. Once you know how to drive, once you've had a bit of practice, it's fine. I think people see the word governance, Claire, and they panic. They consider it the brakes of the organization, whereas to me, done right, done sensibly, it protects you. You talked about data security. We need to make sure the data is secure, that folks inside the organization don't have more access to things than they need, and that information isn't at risk from outside the organization either. Call it cyber attacks, or just leaving very obvious passwords like "password1234" in place; that exposes you to reputational risk and probably fines, because GDPR still applies and data privacy is still an issue. If you're anywhere dealing with Irish companies or European companies, the EU AI Act very much applies. AI is not a Harry Potter magic wand that waves all of that away. But you can do all of this securely once it's done right. And appropriate governance, and I use that word very deliberately, keeps the data right and provides the tram tracks and the rules everyone actually needs to use this securely. Now, you and I will teach people how to do this in a course we're delivering later in the year, in November, so it's not as big a challenge as it can seem.

But if we do go back to the AI companies, it's not just about protecting the data; it's about using AI platforms and products reliably, with confidence and skill, to make sure that even though you've got good data, you're not introducing decisions based on a biased data set. Let me explain. Some very big companies, and we'll not name them until the court cases actually happen, are selling HR and L&D solutions that let business teams make decisions at scale. They're currently being taken to court because the AI algorithms, in other words the code being used to determine who should be employed or to assess candidates who've been interviewed, are based on historically biased data. Let's take the classic example of Amazon, back in 2018, and credit to Amazon, by the way, they fixed this when they realized. They gave a group of software engineers the responsibility to go out and hire the next generation of exceptional software engineers. Unconsciously, in other words not deliberately, they put in the data from their existing most successful employees, and lo and behold, their words not mine, white, male, pale and stale Californians who went to MIT and whatever else and had proved successful in the role were, of course, the only group they really hired from. Females, people of different genders, colors, races: they weren't in the data set. So when the algorithm learned what good looked like, it decided on, and therefore perpetuated, the existing biases in the data. Now, all of this can be accounted for if you're aware of what you're actually doing and what the risks are, and you and I can teach people that. But AI, Claire, it's not just bias in data, it's not just data security.
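One concrete way to catch the kind of skew Kieran describes before deployment is a selection-rate audit. Here is a minimal sketch of a "four-fifths rule" check, a common adverse-impact heuristic; the numbers below are invented for illustration, and a real audit would use the organisation's own screening output.

```python
# A minimal pre-deployment bias-check sketch using the "four-fifths
# rule": if any group's selection rate falls below 80% of the highest
# group's rate, the screening model warrants investigation.
from collections import defaultdict

def selection_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (group_label, was_selected) pairs from model output."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, chosen in decisions:
        totals[group] += 1
        selected[group] += chosen
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_check(decisions: list[tuple[str, bool]]) -> dict[str, bool]:
    rates = selection_rates(decisions)
    best = max(rates.values())
    # True = passes; False = adverse-impact red flag for that group.
    return {g: rate / best >= 0.8 for g, rate in rates.items()}

# Hypothetical screening output: group B is selected far less often.
audit = [("A", True)] * 40 + [("A", False)] * 60 + \
        [("B", True)] * 15 + [("B", False)] * 85
print(selection_rates(audit))   # {'A': 0.4, 'B': 0.15}
print(four_fifths_check(audit)) # {'A': True, 'B': False}
```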
AI comes with its own risks. So you and I are familiar with the term hallucination. Now, that's not something you and I are doing by taking psychedelic drugs and partying all the time. What are hallucinations in AI, and what should HR and L&D professionals do about them?
Claire Nutt: AI is designed to please us, and in its simplest form, hallucinations in AI are where it lies. It makes up a response or an answer that isn't the truth, isn't factual, and isn't founded on hard data or information. And this is why I think HR and L&D professionals are some of the best people to train these models. We are taught as professionals about unconscious bias and awareness of that bias very early on in our careers, so they are some of the people in the organization who are most critical about their own decisions, the things they say, and the responses they give. That's why, in my eyes, they are some of the best people in the organization to input into how AI is used responsibly, ethically, and without bias across an organization. But hallucinations can be quite common when it comes to AI products, specifically when they're in their infancy, because you have to remember it's been trained on a set of information or data. If that data set is very limited, as in the Amazon case you gave, where it was very focused on a particular group or background, you have to feed it more information and give it more instruction around what that bias is. Now, the good thing is you can actually use AI to train bias out of an AI model as well, so AI is also the solution to that problem. But the critical skills we need to teach our people are, one, how do you frame your questions or the information you feed it, and two, how do you recognize that this potentially might be a biased response? And that's back to critical analysis, another key skill HR and L&D professionals are taught very early in their careers, because it's something they have to do. They then disseminate that skill across an organization through their training, learning and development plans, and new manager training, for example, and all the other touch points they have throughout the company. This is why it's so critically important that HR and L&D understand how that AI model works, so they can mitigate or reduce that risk and build on that success as they move through an organization too. But yes, some of the hallucinations can be quite grandiose.
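Claire's point that AI can help train bias out of a model has a classic concrete form: "reweighing" (Kamiran & Calders), which weights training examples so that group membership and outcome become statistically independent in the weighted data. A minimal sketch with invented data, as one illustration of the technique rather than the only approach:

```python
# A minimal "reweighing" debiasing sketch: weight each example by
# P(group) * P(label) / P(group, label), estimated from counts, so the
# model can't simply learn the historical skew. Data is invented.
from collections import Counter

def reweighing_weights(groups: list[str], labels: list[int]) -> list[float]:
    n = len(labels)
    p_group = Counter(groups)               # count per group
    p_label = Counter(labels)               # count per label
    p_joint = Counter(zip(groups, labels))  # count per (group, label)
    return [
        (p_group[g] / n) * (p_label[y] / n) / (p_joint[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Historical data where group "m" was hired far more often than "f".
groups = ["m"] * 8 + ["f"] * 8
labels = [1, 1, 1, 1, 1, 1, 0, 0] + [1, 0, 0, 0, 0, 0, 0, 0]
weights = reweighing_weights(groups, labels)
# Under-represented combinations (e.g. hired "f") get weights > 1,
# over-represented ones get weights < 1; these can be passed as
# sample_weight to most scikit-learn classifiers when retraining.
print({(g, y): round(w, 2) for g, y, w in zip(groups, labels, weights)})
```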
Kieran Gilmurray: So you're saying when I ask ChatGPT, "Am I wonderful?", and it says, "Kieran, you're wonderful", it may not be telling the truth.
Claire Nutt: Might not be true. No.
Kieran Gilmurray: Might not be true. Oh my God. But that's the important bit, though, isn't it? Those L&D and HR professionals who are worrying about AI replacing their jobs: actually, this is a tool that really will augment them. And not just them. When they learn these skills, how to account for hallucinations, how to protect the data, how to put appropriate governance, rules, ethics, and responsible AI policies in place, that's a multiplier effect for me, because one or two HR professionals setting those guidelines can protect every other member of the workforce, making sure they're not hallucinating, not perpetuating bias, and ultimately not putting their reputation at risk or opening themselves to fines, because they know what they're doing. So, Claire, beyond understanding the data, putting a policy in place, and teaching people how to prompt or ask the right questions, is there anything else we've forgotten? Any other tips, tricks, or quick steps that can help HR and L&D teams manage any remaining AI risks we may or may not have talked about or seen?
Claire Nutt: On the risks, I think HR and L&D are the gatekeepers of the organization. That's how I see them when it comes to that compliance, those processes, and those policies. Policies are great; they give instruction. But I would also say document everything. Have your risk register, and have that open conversation across the organization with other stakeholders. This isn't a siloed, singular point of development; it will be business-wide. So it's critically important that you engage with the other stakeholders across the organization who are going to be involved in that process. And when you've had the conversation about, okay, here is the risk we've identified with this approach, but here's the mitigation, it means you don't have a single point of failure either. You have it documented, you've taken advice from advisors who've maybe been through the process before, you've been on the appropriate training that explains what the risks are, what to look for, and how to mitigate them in a business, and you have that appropriately documented so your employees and your organization know the direction of the business, and know what it's doing with AI so it can help the organization but also protect it at the same time. And the great thing is we have training coming up in November, obviously, that's going to help develop those skills and give a deeper level of understanding, because HR and L&D practitioners aren't the IT specialists in an organization, but they're very quickly becoming its AI specialists, so they can strategically advise. We'll delve deeper into the understanding behind why we mitigate that risk, how we can do it with our approaches, and what strategic business decisions need to be made with our influence as well. So I'm very excited about that.
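For readers who want Claire's "document everything" advice in a tangible form, here is a minimal sketch of a risk register kept as structured data rather than a forgotten spreadsheet. The field names and the two entries are illustrative assumptions; adapt them to whatever governance framework you already use.

```python
# A minimal living-risk-register sketch: named owners, scored risks,
# dated reviews. Entries below are invented examples.
from dataclasses import dataclass
from datetime import date

@dataclass
class RiskEntry:
    risk: str        # what could go wrong
    owner: str       # a named accountable person, not a team
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (minor) .. 5 (severe)
    mitigation: str  # what is actually in place today
    review_by: date  # registers only stay alive if reviews are dated

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

register = [
    RiskEntry("CV-screening model shows adverse impact", "Head of HR",
              3, 5, "Quarterly four-fifths audit; human review of rejections",
              date(2025, 12, 1)),
    RiskEntry("Staff paste personal data into public chatbots", "DPO",
              4, 4, "Approved-tool policy; DLP controls; staff training",
              date(2025, 11, 15)),
]
for r in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{r.score:>2}  {r.risk}  (owner: {r.owner})")
```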
Kieran Gilmurray: Yeah, we'll have fun, and people will have fun on that course as well. Two parts there, just to close out the final comments. Having that risk register is absolutely key, but so is learning lessons from it and feeding those lessons back into the business. You and I, through our training, are going to teach people how not to end up in a situation where the horse has bolted, the barn doors are open, and now we're fighting to work out how to get the stallion back into the corral, never mind the barn. That bit at the top end, where we teach people how to responsibly introduce AI into the HR and L&D functions and then distribute that learning throughout the organization, is absolutely key. Knowing how to put the Lego bricks in place to build the fire engine and the hospital and everything else you want to do is essential. But if something does go wrong, and let's be honest, anybody who thinks they haven't been hacked is a bit like me wondering why ChatGPT is saying I'm absolutely amazing; there may be the odd hallucination in there. If you haven't been, you will be. That's not hype in any shape or form; it's just so prevalent, it's unreal. So knowing how to cope once your data has made it into the public domain is absolutely essential, because if you don't fix what's happened pretty quickly, your business reputation and everything else could be in trouble very quickly. But I'm looking forward to training everyone in November: sensibly explaining what AI is, what AI governance is, how to cope when things go wrong, and how to plan to make sure things go right, Claire.
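To close on Kieran's "cope when things go wrong" point, here is a minimal sketch of an incident runbook a team can rehearse on quiet days. The step wording is illustrative, though the 72-hour figure is the GDPR deadline for notifying the supervisory authority of a reportable personal-data breach.

```python
# A minimal incident-runbook sketch: a checklist the whole team can
# walk through in a tabletop exercise. Step wording is illustrative.
INCIDENT_RUNBOOK = [
    ("contain",     "Revoke exposed credentials; isolate the affected system"),
    ("assess",      "Establish what data was exposed, whose, and for how long"),
    ("notify",      "If personal data is involved, notify the regulator within 72 hours"),
    ("communicate", "One agreed statement for staff, customers, and press"),
    ("review",      "Post-incident review; feed lessons back into the risk register"),
]

def run_drill() -> None:
    """Walk the runbook aloud in a tabletop exercise and time each step."""
    for step, action in INCIDENT_RUNBOOK:
        print(f"[{step.upper():<11}] {action}")

run_drill()
```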