The Digital Transformation Playbook

Practical AI Governance For HR

Kieran Gilmurray

AI is already inside your organisation, whether leadership has a plan or not. We unpack how HR and L&D can turn quiet workarounds into safe, transparent practice by pairing thoughtful governance with practical training. 

From the dangers of Shadow AI to the nuance of enterprise copilots, we share a clear, humane path that protects people while unlocking real productivity gains.

TLDR / At a Glance:

• duty of care for AI adoption in HR and L&D
• why blanket bans fail and fuel Shadow AI
• understanding data flows, privacy, and GDPR
• identifying and mitigating bias in models and outputs
• transparency, disclosure, and human oversight for decisions
• culture change to reward openness not secrecy
• choosing enterprise tools and setting guardrails

We dig into bias with concrete examples and current legal cases, showing how historical data and cultural blind spots distort outcomes in recruitment and learning. 

Rather than treating AI as a black box, we explain how to map data flows, set boundaries for sensitive information, and publish plain-language guidance that staff can actually follow. 

You’ll hear why disclosure must be rewarded, how managers can credit judgment as well as output, and what it takes to create a culture where people feel safe to say “AI helped here.”

Hallucinations and overconfidence get their own spotlight. We outline simple verification habits - ask for sources, cross-check claims, and consult a human for consequential decisions - so teams stop mistaking fluent text for facts. 

We also clarify the difference between public tools and enterprise deployments, highlight GDPR and subject access exposure, and show how small process changes prevent large penalties. 

The result is a compact playbook: acceptable use policy, clear guardrails, training on prompting and bias, periodic audits, and a commitment to job enrichment rather than workload creep.
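
By way of a purely illustrative sketch (not a prescribed implementation), here is roughly what that training on prompting and verification can look like in practice. The send_to_llm function below is a hypothetical placeholder for whichever enterprise-approved model your organisation sanctions; the point is the contrast between a bare, search-style question and a structured prompt that sets a role, context, and guardrails, and still routes the output to a human reviewer.

# A minimal, hypothetical sketch: bare query vs. structured prompt.
def send_to_llm(prompt: str) -> str:
    """Placeholder for your organisation's approved, enterprise-grade model client."""
    raise NotImplementedError("Connect this to your sanctioned AI tool.")

# 1. The "I'm searching Google" style of question: vague, no context, no guardrails.
bare_prompt = "write an induction plan"

# 2. A crafted prompt: role, context, task, and explicit constraints that keep
#    sensitive data out and flag assumptions for human verification.
structured_prompt = """
Role: You are an L&D adviser helping design onboarding for a mid-sized firm.
Context: New hires are hybrid workers; induction runs across their first 30 days.
Task: Draft a week-by-week induction outline with learning objectives.
Constraints:
- Do not request or include any personal or commercially sensitive data.
- State your sources and label any assumptions so a human reviewer can check them.
"""

# A human still sense-checks the output before it informs any decision.
# print(send_to_llm(structured_prompt))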

If you’re ready to move beyond fear and bans, this conversation offers the structure and language you can use tomorrow. Subscribe, share with a colleague in HR or L&D, and leave a review with your biggest AI governance question - we’ll tackle it in a future show.

Exciting New AI for HR and L&D Professionals Course:

Ready to move beyond theory and develop practical AI skills for your HR or L&D role? We're excited to announce our upcoming two-day workshop specifically designed for HR and L&D professionals who want to confidently lead AI implementation in their organizations. 

Join us in November at the beautiful MCS Group offices in Belfast for hands-on learning that will transform how you approach AI strategy. 

Check here for details on how to register for this limited-capacity event - https://kierangilmurray.com/hrevent/ or book a chat: https://calendly.com/kierangilmurray/hrldai-leadership-and-development

Support the show


𝗖𝗼𝗻𝘁𝗮𝗰𝘁 my team and me to get business results, not excuses.

☎️ https://calendly.com/kierangilmurray/results-not-excuses
✉️ kieran@gilmurray.co.uk
🌍 www.KieranGilmurray.com
📘 Kieran Gilmurray | LinkedIn
🦉 X / Twitter: https://twitter.com/KieranGilmurray
📽 YouTube: https://www.youtube.com/@KieranGilmurray

📕 Want to learn more about agentic AI? Then read my new book on Agentic AI and the Future of Work: https://tinyurl.com/MyBooksOnAmazonUK


Kieran Gilmurray:

Claire, everybody talks about the upsides of AI, but very few like to talk about the downsides: the bias, the GDPR issues, hallucinations, employee trust. But HR and L&D professionals have a duty of care when introducing AI. So why don't more HR and L&D professionals introduce governance and treat it as a career saver for themselves and their staff?

Claire Nutt:

I think it comes back to knowing what to introduce, so understanding the basics of AI and what it's doing. As an HR professional, you can't introduce an AI policy if you don't understand what data it's processing, how employees should interact with it, what information they should share, what the potential outputs are going to be, and then what the impact on data protection or GDPR could be for your organisation.

I've seen organisations take a variety of different approaches to this. For those I have heard of that have taken a blanket ban on AI, I would urge them to reconsider that approach. We know, and you and I have talked about Shadow AI in the past, if an employee is presented with a tool that's going to make them smarter, faster, more effective, more efficient at their job, do organisations really think that they're not going to use it under the radar? And that is so dangerous for organisations when they haven't trained their employees how to prompt, haven't educated them about what information they're actually allowed to put into the system, or haven't shown them what it's doing in the background.

HR and L&D have a responsibility to their employees to explain how information is being processed in the organisation. They need to be able to stand over what information is being shared with whom, where it's stored, and what privacy notices or associated GDPR conditions are placed around it. So it is imperative that HR and L&D professionals, if they're introducing AI, which I hope they are, are putting the appropriate guardrails around it.

But I still think it comes back to a lack of understanding. They don't know what to put into the likes of those policies because they do not fully understand the technology. And hopefully that's what you and I are here to do: to start to educate people about what those emerging technologies and artificial intelligence programs actually do, so that they can have a deeper level of understanding, educate people internally, and influence those at a strategic level to make the appropriate decisions.

There's a whole host of things that AI also brings, but maybe we chat about bias first of all, Kieran. What do you think is the biggest challenge for HR and L&D professionals when it comes to bias in AI?

Kieran Gilmurray:

Yeah, well, apart from ourselves, you mean. It's interesting, because it leans back into your question before that, Claire. We're actually putting a little bit too much pressure or expectation on both groups, because they're not lawyers, they're not IT technicians, they're not GDPR specialists. And so understanding all of that is quite a challenge.

The big one for me is bias, which is AI models built on data that doesn't necessarily reflect the culture in which people operate. So, for example, at this moment I'm in Riyadh, and I would suggest to a large degree that OpenAI is not necessarily reflective of Arabic culture because of the way it's been built, unintentionally so. When you're born in California, your worldview is a Californian one; when you grab datasets in English, it is going to lead to a natural, and I'm not saying it's the right thing, but a natural bias toward that type of coding and output and data and everything else.

What I'm more worried about from a bias point of view is that we feed databases with data and then we make judgment calls on them. Let's take an example from a few years ago, with Amazon. They asked their software engineers to identify, using AI, who would be a successful candidate. And of course, they used the existing data in their HR system as their dataset of record, put AI on top, and the AI read the data and went, oh look, lots of males, very few females, very few others or non-binary people. And they come from these universities, and they have this particular background. And guess what? For a long while, when there were very few females actually in STEM subjects or being recruited into them, there was a bias toward males. Now, in fairness, to Amazon's credit, they recognised this, and this is the joy of AI conversations around bias, and then we'll come to hallucinations and data privacy and GDPR in a second: it's raised the issue above the parapet for one of the first times.

AI itself isn't necessarily biased. Unconsciously, the developers may be, the data may be, but AI is a reflection of ourselves. And Claire, I might suggest we maybe don't like what we're actually seeing.

Claire Nutt:

Yeah, and I think this is the push-pull position that HR and L&D professionals find themselves in. They are the best gatekeepers in the organisation when it comes to data and when it comes to an awareness of their own bias and biased decision-making. If anyone is best placed to educate employees across the whole organisation on how to use AI and how to sense-check for bias in those responses, it is those practitioners. They should be the ones leading the conversations around that, too.

And even your example of Amazon, that did happen some years ago, but there's a very active legal case against a very well-known HR software provider in America at the minute, where there are open cases of bias and discrimination in recruitment decisions where AI has been involved in that decision. So I think transparency is a critical layer that HR needs to continue to apply when it comes to AI: if they are considering using it in a decision-making process, they are transparent and clear about what it's doing, how it's been done, and what human has checked that output. And I think that brings a degree of confidence across the organisation that it can be used responsibly and ethically. Or should be; that's what we would hope for.

But this is the beauty of HR. Legislation or rules can be black and white, but people are grey. And this is where HR and L&D come in to form the culture around the behaviour that is expected at an organisational level: if we're introducing AI, this is our policy, this is how we expect you to use it, and we expect you to use it for good, but we also expect you to disclose when you've used it as well.

And I think there's this little bit of secrecy around it, where employees think, oh, I've used AI and it's maybe given me a better response than I could give as a human, so does that make me look bad in my role? Absolutely not. Do you not think your employer would be happy that you're refining your responses, that you're becoming more knowledgeable? Employees can't search for what they don't know.

Kieran Gilmurray:

There is a risk to that though, because I wrote an article a while ago called Hidden Cyborgs. And what I was saying there, Claire, is that their employer should like the fact they've come up with a better answer. But some employers are saying, well, was it you or was it AI that did that? So immediately, the doubt. Then when they look for promotion, they're going, well, was it you or AI that really delivered those results? True stories, you know. And the third one I've seen as well is, okay, now you can do that, why don't we give you more work and we'll not hire the replacement we were going to? So rather than job enrichment, where employers should be giving people the tools, encouraging their use, and being transparent about it, employees find that they can't be.

And therefore it comes back to your original risk, which is: I'm actually going to use the tool because it's so useful, but now I'm not going to say so. Which means, regardless of bias or hallucinations, which is where generative AI invents things, you just don't know that your organisation is being put at risk by employees who are using, as you said, shadow AI to actually get the job done, but not saying so. That's kind of scary.

Claire Nutt:

It is scary, and it comes back to what guardrails employers are putting around that use. What have they considered? I mean, does the employee know the difference between Copilot and an enterprise Copilot? I was speaking to a client a few weeks ago, and it just came up in conversation, and there was an opinion of, oh no, you can't use Copilot because it's public-facing. That organisation didn't realise you can have a version of those copilots that is enterprise-grade, that is locked down to your environment, where you restrict the information that's actually available or shared, because it is protected. So it's those kinds of considerations that we need to educate others about. And I really hope that the likes of this podcast will start to get people thinking, oh, I haven't thought about that, I need to consider that, or maybe do a wee bit more research around it.

And there are so many variations as well. It's trying to find the right balance that brings an effective return for your employees, where it's used for good, because that's what we want for organisations, where you actually see the return on investment because it's improved a process or saved time or money. But there is transparency and clarity across the organisation about where it's being used. Encourage employees to use it and to disclose where they have used it, with the level of reassurance that we will review this on an ongoing basis and we will have open conversations with you about how it's impacted you and your level of work as well. I can see the value of it. But then I've been the employee, I've been the manager in that relationship, and I've seen it from a strategic level where the organisation's trying to introduce it. There are so many different facets to it, and we have to consider all of that to make sure it's successfully implemented.

And then just to come back to your hallucination comment as well, the other big risk that organisations are facing: AI is here to please us. It wants to do a good job. I actually read an article yesterday about how you should be kind to your LLM when you're engaging with it, because sometimes threatening behaviour can produce hallucinations, because it believes or perceives that it has to produce a response faster. So there's a lot to consider within that as well. I think you also have to educate those users about hallucinations and knowing when to sense-check. You should absolutely not be using it for the likes of legal advice or direct HR advice specific to your organisation. But is that happening without your knowledge? You don't know unless you've appropriately structured the organisational approach to it. So that can be scary too, if people believe what it's telling them and aren't using that critical thinking to ask: actually, where are the facts in this? How am I fact-checking this? And is this information going to lead to a decision? And if so, do I need to speak to another human about this, just to make sure that we're confident in it?

Kieran Gilmurray:

Well, and again, even that in itself: if you make a decision based on an AI, who's to blame, you or the AI? And how's the organisation going to react? It is worrying. I think that is one of the risks, you know, individuals using a tool that is a sycophant, that's designed to please. Because over the last year, one of the biggest risks for me personally is seeing the fact that it's moved from a business tool to a mental health and relationship advice tool. That is the top use, according to Harvard, and that in itself introduces a whole host of risks, psychological and otherwise. And people have harmed themselves off the back of the advice that these tools have given them. It is concerning as well that the manufacturers are not coming out and explicitly saying that they're not a qualified, certified, medically approved mental health device; they're sort of playing at the edges of, oh look, our tool can do absolutely everything, and that's scary as well. But again, that's part of the education piece, as you say: not just the governance rules, but how you go about it, the culture of your organisation, what you encourage, and importantly how you react when people make mistakes.

So, Claire, if we were to wrap up: how can HR and L&D professionals create trust when rolling out AI? And what are some of the practical things that you and I might suggest to help them manage or mitigate the risks introduced by this technology?

Claire Nutt:

Yeah, a lot of what we've already highlighted. Policy is key. You need to set out the behaviours expected within the organisation and how you're responsibly and ethically adopting AI in your processes. For example, it might be quite low-level, it could be immediate responses to questions, or it could be formulating documents that are then going to feed into strategy.

Disclosure is another. You need to encourage employees to make sure they are disclosing the use of AI throughout any documentation they've presented and any decisions they have made along the way, but also give them guardrails and guidelines around what's appropriate and what's not, because again, they do not know if you're not telling them these are the behaviours we expect across the organisation.

Also consider the type of AI that you're going to use. Is it built into an existing product? Do you have licences to use within your own organisation and your enterprise that aren't internet-facing, or are you allowing employees to use other tools that are internet-facing?

Then there's educating them and giving them an understanding of what is sensitive information, bringing it back to data protection and privacy, and telling them exactly what business information they cannot share. I've heard a lot of stories about organisations who have found out that employees have shared very sensitive information, even when it comes to bidding for potential new business. That's out there; you cannot claw that back, and you can't just ask AI to scrub it from its memory and not share it with anyone else. It's in the ether. So making sure you're protecting your organisation and its interests, but also protecting your employees from making those mistakes, will make sure that you mitigate those risks.

And then beyond that, training on things like how to prompt: what questions to ask, what information should you put into it, how do you use it appropriately, and how do you then sense-check the information that comes back out of it? We've seen the quality difference in those two approaches, you know, asking the general "I'm searching Google" question versus an AI prompt that's extremely well crafted and has the right context, the right information, and, I suppose, the right role within it, so you get the most appropriate response. I don't know if I've missed any. What are your thoughts?

Kieran Gilmurray:

I think the big one is: if you don't do all of that, you're subject to a subject access request, or, if you deal in the south of Ireland and Europe, you will be subject to EU directives, workplace directives, GDPR, privacy acts, and everything else. You will have folks coming knocking at your door, and that can result in large fines, and you will learn quickly. So, to a degree, you covered everything, but this isn't an optional piece, because, as you said, whether you realise it or not, your staff are using the tool. That is shadow AI. We see it in every country, every sector, you and I, Claire, all the time. It does take a little training, it does take learning how to use the tool and passing it on, it does take a policy and guidance, it does take culture; all the things that HR professionals are particularly well suited for. And if they come along to the course that you and I are running at the end of November in Belfast, then they will learn all about this and so much more, because this is an amazing tool. There are risks to everything; there are risks when you walk out the door, and we've learned to walk out the door carefully and sensibly. We learned how to use email, we've learned how to use the internet. How did we do that? We've probably made some mistakes, but largely we were taught how to do it right. You and I will teach people how to do this right, so they can implement one of the most amazing technologies I have seen in my lifetime for all of their staff, securely and safely. And long may that continue.

unknown:

Agreed.

Claire Nutt:

Thank you, Kieran.