The Digital Transformation Playbook
Kieran Gilmurray is a globally recognised authority on Artificial Intelligence, cloud, intelligent automation, data analytics, agentic AI, and digital transformation. He has authored three influential books and hundreds of articles that have shaped industry perspectives on digital transformation, data analytics, intelligent automation, agentic AI and artificial intelligence.
What does Kieran do?
When I'm not chairing international conferences or serving as a fractional CTO or Chief AI Officer, I'm delivering AI, leadership, and strategy masterclasses to governments and industry leaders.
My team and I help global businesses drive AI, agentic AI, digital transformation and innovation programs that deliver tangible business results.
🏆 Awards:
🔹 Top 25 Thought Leader Generative AI 2025
🔹 Top 50 Global Thought Leaders and Influencers on Agentic AI 2025
🔹 Top 100 Thought Leader Agentic AI 2025
🔹 Top 100 Thought Leader Legal AI 2025
🔹 Team of the Year at the UK IT Industry Awards
🔹 Top 50 Global Thought Leaders and Influencers on Generative AI 2024
🔹 Top 50 Global Thought Leaders and Influencers on Manufacturing 2024
🔹 Best LinkedIn Influencers Artificial Intelligence and Marketing 2024
🔹 Seven-time LinkedIn Top Voice
🔹 Top 14 people to follow in data in 2023
🔹 World's Top 200 Business and Technology Innovators
🔹 Top 50 Intelligent Automation Influencers
🔹 Top 50 Brand Ambassadors
🔹 Global Intelligent Automation Award Winner
🔹 Top 20 Data Pros you NEED to follow
Contact my team and me to get business results, not excuses.
☎️ https://calendly.com/kierangilmurray/30min
✉️ kieran@gilmurray.co.uk
🌐 www.KieranGilmurray.com
🔗 Kieran Gilmurray | LinkedIn
The Future Isn't Just Gen AI - It's Ethical, Orchestrated Intelligence
From academic curiosity to business transformation - Peter van der Putten's journey through AI spans decades, beginning in 1989 when he first explored the fundamental nature of intelligence. As Lead Scientist and Director of Pega's AI Lab, Peter brings a unique perspective that bridges theoretical understanding with practical implementation.
[Sponsored]
The conversation reveals how AI has evolved from isolated research projects to become deeply integrated into enterprise operations. Peter articulates a vision where intelligence isn't merely a technological overlay but woven into every customer interaction. This philosophy has guided Pega's approach: creating a horizontal platform where different AI technologies - from process mining to speech recognition, decisioning engines to generative models - work in concert within broader business ecosystems.
What emerges is a nuanced view of enterprise AI implementation. The most successful deployments blend multiple AI approaches, combining the predictability of traditional decisioning with the flexibility of generative capabilities. Take insurance claims processing: automated systems can assess approval likelihood or fraud risk while agentic components gather information and synthesize findings - all within structured workflows that maintain regulatory compliance.
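For readers who think in code, the blend described above - deterministic decisioning gating what the generative or agentic components may do - can be sketched roughly as follows. All names, thresholds, and routing rules here are invented for illustration; this is not Pega's actual implementation.

```python
# Hypothetical sketch: predictable business rules route each claim, and the
# agentic step is only allowed to gather and synthesize inside that workflow.
from dataclasses import dataclass, field

@dataclass
class Claim:
    claim_id: str
    amount: float
    fraud_score: float                  # from a predictive ("left-brain") model
    documents: list = field(default_factory=list)

def decide_route(claim: Claim) -> str:
    """Predictable, auditable business rules decide what happens next."""
    if claim.fraud_score > 0.8:
        return "fraud_review"           # escalate to a human specialist
    if claim.amount < 1_000 and claim.fraud_score < 0.2:
        return "auto_approve"           # low value, low risk
    return "agent_assist"               # agent gathers info, a human decides

def process_claim(claim: Claim) -> dict:
    route = decide_route(claim)
    trail = ["routed:" + route]
    if route == "agent_assist":
        # The agentic component only collects and summarizes; it cannot approve.
        trail.append("agent:gathered_documents")
        trail.append("human:final_decision_pending")
    return {"claim_id": claim.claim_id, "route": route, "audit_trail": trail}
```

The point of the sketch is the division of labour: the creative components never hold the approval pen.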
This orchestrated approach addresses one of generative AI's central challenges: hallucination. By embedding large language models within established decisioning frameworks and governance structures, organizations can harness creative capabilities while maintaining accuracy and accountability. Peter emphasizes that ethical principles like explainability and transparency become even more critical as AI systems grow more complex and autonomous.
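The "embed the LLM inside governance" idea can be illustrated with a tiny sketch: whatever the model generates is checked against a deterministic policy before anything acts on it. Here `generate_reply` is a stand-in for a real model call, and the payout limit is an invented example, not a real rule.

```python
# Sketch of wrapping a generative model in a decisioning guardrail.
PAYOUT_POLICY_MAX = 5_000   # hypothetical hard business rule

def generate_reply(prompt: str) -> dict:
    # Stand-in for an LLM proposing a settlement (it may well hallucinate).
    return {"proposed_payout": 7_500, "rationale": "storm damage estimate"}

def governed_payout(prompt: str) -> dict:
    draft = generate_reply(prompt)
    if draft["proposed_payout"] > PAYOUT_POLICY_MAX:
        # The rule, not the model, has the final word.
        return {"payout": PAYOUT_POLICY_MAX, "flag": "clamped_by_policy"}
    return {"payout": draft["proposed_payout"], "flag": "ok"}
```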
Looking toward the horizon, Peter identifies research agents as an early "sweet spot" for agentic implementation - tools that gather and synthesize information across sources while posing relatively low risk. The next frontier? Multi-agent collaboration, where AI systems must negotiate, understand each other's goals, and work cooperatively across organizational boundaries. Join us for this fascinating exploration of AI's past, present and emerging future - and discover why effective implementation demands more than just advanced technology.
🎥 Watch Peter's interview on YouTube - https://youtu.be/W01zmYX-K8A
#PegaWorld2025 #AI #AgenticAI #PegaWorld #AILiteracy Pegasystems #ad #Sponsored
Contact my team and me to get business results, not excuses.
☎️ https://calendly.com/kierangilmurray/results-not-excuses
✉️ kieran@gilmurray.co.uk
🌐 www.KieranGilmurray.com
🔗 Kieran Gilmurray | LinkedIn
🐦 X / Twitter: https://twitter.com/KieranGilmurray
📺 YouTube: https://www.youtube.com/@KieranGilmurray
Peter, for those who don't know you, would you mind giving us a little bit of an introduction to you, your role at Pega and what you're up to at this moment in time?
Speaker 2:Yeah, absolutely. Well, thanks for having me, by the way. So I'm Peter van der Putten. I'm Lead Scientist and Director of the AI Lab at Pega, so I'm responsible for AI innovation both at our clients and internally. We also need to not just eat our own dog food but drink our own champagne: how can we apply AI to our own offerings and go-to-markets and really innovate them?
Speaker 1:Yeah, I love the way you say drink your own champagne. I was warned about telling people to eat dog food years ago, I remember. So let's go back a little bit, because you and I were talking beforehand about your career in AI. Bring us back in the AI world: how did you get into it? How did you get to today? Before we start talking about some of the things you're doing today.
Speaker 2:Oh wow, so we have to keep this to...
Speaker 1:At least a day, at least a day. Okay, okay, yeah, no, no.
Speaker 2:So I started to study AI in 1989.
Speaker 1:So you know, you're not dating yourself.
Speaker 2:I recognize the years. It was more that I had this fascination of what is intelligence, where is it coming from? These are questions that, you know, probably even the Neanderthals had. And then in the mid-90s, when I went into industry, I really wanted to work in AI, so I worked in a small research company where we did all kinds of very wild things with AI: search engines, video analysis, you name it. So, blasé as it was, I thought I had seen it all.
Speaker 2:But then in 2002, I came across a company and Rob Walker, one of my current colleagues as well, and he said, well, we're really focusing on making sure that if you have AI models (let's assume we have those as a given), how can you really bring that into an interaction, into every customer interaction, so that it has real impact? So not some project in a corner of a company, but really something that you insert into every single customer interaction. I was like, oh, that sounds interesting. So that was that whole field of decisioning and next best action, and then ultimately, with Pega, I got into this role that I have today, and I'm really looking at AI horizontally. We're a platform for AI and automation, so for me, AI is not just machine learning. It could be process mining, speech-to-text, real-time decisioning, generative AI, agentic AI.
Speaker 1:So bring that to life a little bit when we're talking about, maybe, some of the customer examples, because there is that transition, isn't there? Moving from academia, where the research is the fun part, the investigation, to now: actually, we need to commercialize that as well. There's always a little bit of tension there.
Speaker 2:Yeah, yeah, I think so. Being an AI guy, I would love to say it's all about the smarts of the AI, but the level of adoption is quite often more about how you can embed it into the wider ecosystem or architecture or infrastructure. So let's say, in that marketing example, it's great that you can predict what type of products or recommendations people will be interested in, but you also have your own marketing strategies, your own policies, that you need to take into account. So it will immediately become more of a mix of business rules and machine learning and predictive analytics that you require to build a system where you can really say, well, we will plug this into every single customer interaction. But even more fundamentally, it also requires that the company maybe becomes more customer-centric and is thinking from a customer point of view. So there are lots of different things, in terms of capability but also in terms of vision or organizational change, that need to happen to make AI a success. And this is just marketing; the same goes for customer service or intelligent automation.
Speaker 1:Every single team. So bring us over the last couple of years in Pega itself, because there have been a lot of changes; even over the last 12 months, since I was last here, it's extraordinary. But bring us back roughly 2021 to 2025, and we'll try and shrink that one in as well. Some eons ago, some eons ago in AI terms. Yeah, yeah, cool.
Speaker 2:Yeah, I think around that time, and I'm biased, but I think we had a lot of success with the combination, at that point in time, of workflow and real-time decisioning, not just in the marketing space. We also said, well, if you can make automated decisions in marketing and personalization, why not use that across any type of workflow or process? If there's an insurance claim, you can decide, okay, which agent needs to work on this, or will we likely approve this claim, et cetera. So that was one of the first things that I looked at: okay, how can we bring decisioning to any type of workflow application? But also around that time we acquired a speech-to-text company and a process mining company, and so you start to think about how you can really have a horizontal view where all these AI technologies are working together in an overall narrative, and how you link it then also back into automation: BPM, robotics, case management. So, well, not boring.
Speaker 2:And then Gen AI happened. I had access to GPT roughly two months after the first GPT-3 paper came out, more than a year and a half before there was general public access to GPT. So I had been playing with generative AI for quite a while, and even before that through my job at the university. That meant that, I think, half a year before ChatGPT came out, we were already in discussions with the management team, with Alan, the CEO, and Kerim, head of product, et cetera: we really need to get into Gen AI. So that allowed us to get into Gen AI really early, but also now into this next step, which is more around agentic AI.
Speaker 1:So split that down a little bit, because that is one of Pega's unique advantages: the decisioning plus the workflow. Because as excited as we are about generative AI, generative AI hallucinates. So what you end up with looks very pretty on the surface, but actually the answers you're getting out can be, often are, confabulated; they're made up. And if I'm sitting in a regulated industry like a bank, then I'm going into policy issues, I'm going into governance issues, the regulator's knocking on my door. So just explain that to the audience, because I think that needs picked apart. I think now people are so excited by Gen AI they believe: plug ChatGPT in and I'm now a Gen AI company. And I'm exaggerating just a little bit, you know. So again, just bring that to life a little bit, if you don't mind. Yeah.
Speaker 2:I think maybe a running example will help here, if we stick with that example of, let's say, an insurance claim, because it's just the most exciting thing in the world. But it's an important moment both for the insurance company and for the customer. That's why they are your customer: help them in times of need.
Speaker 1:The one thing they never want to do. I have insurance, I never want to use it. Now I need it. It better work.
Speaker 2:You want it to be as seamless as possible, right? But that could be, or should be... It's not just a matter of throwing ChatGPT at it, right? You need to orchestrate different types of workflow, maybe with different types of agentic use. You need to orchestrate the overall first notice of loss process, as it's called.
Speaker 2:That could be a structured process, but as part of that process, you may want to call out to decisioning to figure out: oh, maybe we can already see that we would likely approve this anyway, or, oh no, there's actually a human that needs to work on this, it's too complex. Okay, which human, which team do we need to send it to? And during the process, there might also be real-time decisions: should we escalate this? Is there a likelihood of fraud or leakage, or can we recover from a third party? And so there are all kinds of decisioning points in that process.
Speaker 2:Now, with generative AI, we can use, let's say, more classical generative AI to summarize the situation or, at the end of the case, summarize the entire process we went through. But that's quite passive, scripted use of generative AI. More active use would be an agentic approach. So you can imagine that the human claims agent is supported by an AI claims agent that's gathering information, that's enlisting maybe other agents that work on its behalf, maybe to look at these areas of leakage, fraud, et cetera. But it's always a mix between human work, very regulated business rules, left-brain AI (like prediction of the likelihood of fraud, which is not a Gen AI model), Gen AI and agentic use.
Speaker 1:Yeah, because I think that makes it predictable. It does, but that's the bit that confuses people. I think they go: it's Gen AI, it's predictive AI, it's narrow AI, and never the twain shall meet. But that combination is key. So, again, just looking at agentic AI, because I'm thinking of changing my name to Kieran Agentic, agentic is that popular at this moment in time. Bring that to life for me and talk about the various agents and what they're doing. And why do we actually need agents in the workflows that now exist? Why can't we just get humans to do it? Why can't Gen AI do it? Are we pushing them in, or do they actually have a valuable purpose?
Speaker 2:Yeah, no, they do have a valuable purpose. So how do these agents actually work?
Speaker 2:Let me, because they sound like little magical wizards, but essentially how you configure them is: there's an element of instructions, like what is your goal? What type of problems can you solve? Which type of problems are you not meant to be used for? What kind of data do you have available? What kind of knowledge sources can you tap into? What type of actions can you take? For any of those actions, do we need to ask for permission to be able to execute them? And that could link then also back into these other types of artifacts: a set of governed business rules, a predefined workflow, a particular integration that you can call if you have the right security rights to do that. So it's really blending the power of these agents, because they are powerful in the sense that they can interpret context and understand goals and have some idea of when to execute which tools, with the reliability of the very predictable tools that sit underneath: the business rules, the workflows, whatnot.
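The configuration Peter describes (a goal, in-scope and out-of-scope problems, tools, and per-action permissions) could be sketched, purely as an illustration with invented names, like this:

```python
# Hypothetical sketch of an agent definition: goal, scope, tools, and
# per-tool permission gating. Not Pega's actual agent model.
from dataclasses import dataclass, field
from typing import Callable, Dict, Set

@dataclass
class Tool:
    name: str
    run: Callable[[str], str]
    requires_permission: bool = False   # consequential actions need sign-off

@dataclass
class AgentConfig:
    goal: str
    in_scope: Set[str]       # problems the agent is meant to handle
    out_of_scope: Set[str]   # problems it must refuse
    tools: Dict[str, Tool] = field(default_factory=dict)

def invoke(cfg: AgentConfig, task: str, tool_name: str, arg: str,
           permission_granted: bool = False) -> str:
    """Run one tool on behalf of a task, enforcing scope and permissions."""
    if task in cfg.out_of_scope or task not in cfg.in_scope:
        return "refused: out of scope"
    tool = cfg.tools[tool_name]
    if tool.requires_permission and not permission_granted:
        return "blocked: permission required"
    return tool.run(arg)
```

The interesting design choice is that the scope and permission checks live outside the model: the agent can be as creative as it likes, but the gate is deterministic.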
Speaker 1:And that's what you're saying. There is key, though, as well, if I just pick one part out, because very often at the moment, we're seeing single agents and they're meant to do everything, but realistically, you need multiple agents all working concurrently, but they're not to be left alone to their own devices. We're very much talking about putting governance and rules and responsible AI around that. Could you tear that apart a little bit for folks as well, because that ethical, well-governed AI is absolutely essential.
Speaker 2:Yeah, yeah. So when you look at, let's say, the requirements that you have for agentic AI, they're similar to the requirements that you would have for other forms of AI as well. Because people see new technology, they think everything is going to be different, but it's the same common-sense rule of explainable AI: do I understand how an AI actually reached a conclusion? It's accountability. Accountability in this case means that you also need to hold the agent accountable, in the sense that it needs to be auditable. How did the agent get to a particular state, step or conclusion? You need to be able to track back to see how the agent has done that.
Speaker 2:Otherwise you can never do a proper audit, or respond to a customer request when they call you up and say, why are you not covering my insurance claim here? So it's, I think, these ethical principles of explainability, of privacy. Again, you don't want a bunch of rogue agents; you can't just let them loose so they can access everything and anything. It's the accountability, that you have the same or maybe even more transparency on what these different actions are, similar to how you would do it with humans or with workflow or with other forms of automation, so you can really see, okay, how did I get to this particular conclusion? All these same principles actually apply.
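The auditability requirement Peter describes (being able to track back how an agent reached a conclusion) amounts to recording every step as it happens. A minimal, hypothetical sketch, with invented class and step names:

```python
# Minimal sketch of an auditable agent: every step is recorded so that
# "how did it reach this conclusion?" can always be answered afterwards.
class AuditedAgent:
    def __init__(self) -> None:
        self._trail: list = []   # append-only record of (step, detail) pairs

    def record(self, step: str, detail: str) -> None:
        """Append one step to the audit trail."""
        self._trail.append((step, detail))

    def conclude(self, conclusion: str) -> dict:
        """Return the conclusion together with the full trail behind it."""
        self.record("conclusion", conclusion)
        return {"conclusion": conclusion, "trail": list(self._trail)}
```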
Speaker 1:Yeah, you'll be disappointing people now. I thought you could just throw an agent at it, but it is so key, isn't it?
Speaker 1:And, and if not, actually the explainability needs to be more transparent just to keep people's belief in this new technology. So just to finish off, to round up, today as well so we've talked about, like, where we've gotten to. There's a heck of a lot of progress over the last couple of years. If we're sitting here in 12 months time at PegaWorld 2026, what is what's going to happen? What do you see as the product areas, the agentic areas, the AI, the gen AI, the ethical side of things? Where are we going in the journey in the next 12 months? What should people look out for or watch out for?
Speaker 2:I think there are two dimensions. One is where the technology goes, and the other is more the actual adoption. If there's one lesson we know, it's that technology can be out there for 10 or 15 years already, and only then does the real adoption take place. Now we see with AI that's going a lot quicker, but there's still a difference between technology being available and it actually being adopted. So I think that's where we will see a lot of progress over the coming year, where organizations will actually start to use these agents and get some real-world feedback on which use cases work well, how particular risks were mitigated, and which use cases didn't really turn out that well. So where is the sweet spot?
Speaker 2:For example, research agents are agents that combine information from different sources and then synthesize it, and we've had a lot of success with an internal agent that we built, I think almost two years ago now: Iris the intern. We can ask her all kinds of questions. She has 20 different tools and knowledge sources, and she's used by between one and two thousand people per day at Pega, and we're a five-to-six-thousand-person outfit. So that's a good early signal that research agents might be a sweet-spot use case. It makes sense because it plays to the strengths of generative AI, and the risks are much lower, because the only action they take is gathering information and synthesizing it.
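A toy version of such a research agent might look like the following: it only reads from knowledge sources and synthesizes, taking no side-effecting actions. The keyword matching and string join here are crude stand-ins for real retrieval and LLM summarization.

```python
# Toy "research agent": gather snippets from read-only sources, then
# synthesize them into one answer. Purely illustrative.
def research(question: str, sources: dict) -> str:
    keywords = {word.lower() for word in question.split()}
    # Gather: keep any source whose text shares a keyword with the question.
    hits = {name: text for name, text in sources.items()
            if keywords & set(text.lower().split())}
    # Synthesize: a real agent would summarize; here we simply join the hits.
    return " | ".join(name + ": " + text for name, text in sorted(hits.items()))
```

Because the only action is reading, the worst failure mode is a poor answer, not an unwanted side effect, which is what makes this a low-risk starting point.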
Speaker 2:So I think we're going to see quite a lot of progress in the actual adoption. We're going to see lots of interesting breakouts next year when people tell what worked and what didn't work and what they learned from that. On the technical side of the house, I think it's this whole area of multi-agent: how do they collaborate, how do they negotiate, how do they even understand what the other agent is supposed to do? That might be easy if they all live in one system, but if I call out to agents on other systems, then they need to reverse engineer what the other agents actually believe or what their goal is. I think those are a lot of interesting, open questions. Well, we won't be bored over the next 12 months. We will not be bored.
Speaker 1:Look, thank you so much indeed for sharing all that knowledge. I have to say it's exciting, and it's interesting to watch, because everybody assumes agents are new now, but, as you're describing, these have been around for years. I'm surprised we haven't worked out more of the rules before now. But, like everything, 2025 was the year of large language models; 2025, 2026, 2027 should be the age of agentic AI. The interesting bit is: how do we get great agents, great large language models, great AI and great people to work together to produce great businesses?
Speaker 2:And work your ethics back into it. I'm glad you brought up that point, so I think that's also really important.
Speaker 1:Well, that's the key bit doing business the right way. So, thank you so much indeed. You're welcome. Enjoy the rest of the conference and hopefully, to goodness, we'll be sitting here in 12 months time talking about all the other amazing things we've done. Thank you so much indeed.
Speaker 2:Thank you.