The Digital Transformation Playbook

Why Most AI Projects Collapse And How To Build One That Works

Kieran Gilmurray

AI projects are failing at a rate that should make any leadership team pause, and the uncomfortable truth is that the model is rarely the real problem. We sit down with Andy Hayler, president and CEO of the Software Industry Authority and a long-time data strategist, who explains why organisations keep shipping AI initiatives that look impressive but deliver zero measurable value.

At A Glance / TLDR:

  • the 95% AI project failure rate and what it signals
  • poor data quality as the top cited cause of failure
  • why LLMs are probabilistic and why hallucinations are inevitable
  • real world examples of confident errors in legal and strategy work
  • choosing the right type of AI for the job, not defaulting to LLMs
  • what the top 5% do differently: ownership, governance, ROI, measurement
  • building AI literacy so teams know limitations and safe use patterns
  • starting small with high value use cases and scaling via proven wins

We dig into the biggest repeat offender: data quality. When teams bolt generative AI onto messy corporate documents through retrieval augmented generation (RAG), the system can only reflect the gaps, contradictions, and missing ownership already baked into the data estate. From there, we tackle the misconception that large language models behave like normal software. LLMs are probabilistic token predictors, not deterministic calculators, which is why hallucinations and “confidently wrong” answers show up in high stakes areas like law, medicine, and engineering unless you design proper human review and verification.
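To make the RAG pattern concrete, here is a minimal sketch, in Python, of the loop described above: chunk the corporate documents, pull the chunks most relevant to the question, and paste them into the prompt. The keyword-overlap retriever and the `call_llm` stub are illustrative placeholders rather than any particular product's API; the point is that whatever is wrong in the documents flows straight into the prompt.

```python
# Minimal, illustrative RAG loop: retrieve relevant corporate text, then prompt the model.
# `call_llm` is a hypothetical placeholder, not a real API; swap in your provider's client.

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real model API call."""
    raise NotImplementedError("plug in your provider's client here")

def chunk(documents, size=500):
    """Split raw documents into fixed-size text chunks."""
    chunks = []
    for doc in documents:
        for i in range(0, len(doc), size):
            chunks.append(doc[i:i + size])
    return chunks

def retrieve(question, chunks, k=3):
    """Toy retriever: rank chunks by word overlap with the question.
    A production system would use embeddings and a vector index instead."""
    q_words = set(question.lower().split())
    ranked = sorted(chunks, key=lambda c: len(q_words & set(c.lower().split())), reverse=True)
    return ranked[:k]

def answer(question, documents):
    context = "\n\n".join(retrieve(question, chunk(documents)))
    prompt = (
        "Answer using ONLY the context below. If the context does not contain "
        f"the answer, say so.\n\nContext:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)

# The model can only be as good as the documents it is fed: contradictions,
# gaps, and stale policies in `documents` flow straight into the answer.
```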

Andy also breaks down the “AI is one thing” myth, contrasting LLMs with machine learning for predictive maintenance and reinforcement learning breakthroughs like protein folding. The practical takeaway is an operating model: pick the right technique, define success with ROI and clear metrics, assign business ownership for data, and start small so early wins build confidence and capability across the organisation.

If you want to be in the 5% that succeed, subscribe, share this with a colleague who owns delivery, and leave a review. 

Where is your organisation most likely to be “confidently wrong” with AI?

LinkedIn: Andy Hayler

Andy's Book on Amazon UK: Beyond the Hype: A Realist's Guide to AI

Support the show


Contact my team and me to get business results, not excuses.

☎️ https://calendly.com/kierangilmurray/results-not-excuses
✉️ kieran@gilmurray.co.uk
🌍 www.KieranGilmurray.com
📘 Kieran Gilmurray | LinkedIn
🦉 X / Twitter: https://twitter.com/KieranGilmurray
📽 YouTube: https://www.youtube.com/@KieranGilmurray

📕 Want to learn more about agentic AI? Then read my new book on Agentic AI and the Future of Work: https://tinyurl.com/MyBooksOnAmazonUK


The 95% AI Failure Shock

Kieran Gilmurray

Today we're tackling a number that should make every executive sit up. MIT estimates that 95% of all AI projects fail. 95%. So the question is simple: why are so many organizations getting this wrong, and what separates the 5% that actually create value? Today I'm joined by Andy Hayler, president and CEO of the Software Industry Authority, MDM pioneer, and global data strategist, to answer this question. Andy works at the sharp end of data strategy and AI delivery, helping organizations move from ambition to implementation. His work sits at the intersection of data governance, architecture, and business value; in other words, where AI either delivers or disappoints. Andy and I are going to unpack why AI projects really fail, whether your AI is a square peg in a round hole, and what the 5% do differently. So if you're serious about going beyond experimentation into measurable results, this episode is for you. Let's get going. Andy, excellent to see you, sir.

Andy Hayler

Thank you very much for having me on, Kieran.

Kieran Gilmurray

Andy, MIT say that 95% of all AI projects fail. From your experience, why do so many organizations get this so badly wrong?

Probabilistic Models And Hallucinations

Andy Hayler

Yes, and in fact it's not just MIT. Boston Consulting Group ran a study in December 2025 and came to exactly the same conclusion: a 95% failure rate. There are other, slightly lower figures from other organisations, 87%, 85%, but even 85% is obviously not great. I think there are a number of things going on. The study itself asked executives why they think they failed, which is obviously a good start, and a number of areas came up. One common thread is limited or poor data quality within companies. Clearly an LLM itself is highly dependent on its training data, but when you do a corporate project you usually supplement the LLM's core training data with corporate data, usually through a technique like retrieval augmented generation, or RAG. You're adding corporate documents, policy documents, technical documentation, product specifications, training manuals, whatever, into the mix so the LLM can consider those in its answer. That obviously raises the question: if you're plugging in all this corporate data and that corporate data is a mess, you're potentially going to have an issue. So that's one reason there's a problem, and what it suggests is that companies need to pay a lot of attention to data quality, which has never been a particularly fashionable area. Suddenly it's becoming a lot more important, because it's really one of the key things on which your AI project will either founder or prosper. I think the second reason is that a lot of people haven't really internalised exactly how LLMs work. What I mean by that is we are so used to dealing with computer systems that are deterministic in nature. In other words, if you go into Excel, type in two figures, and in a third cell say multiply those two cells together, you get an answer. And you get the same answer tomorrow, and the day after, and the day after that. That is not how LLMs work. LLMs are probability-generating engines that predict one token at a time. A token is like a small word; essentially it's the unit LLMs work in, and they predict what they think is the most plausible next token, the next word or whatever, in an answer. So if you ask an LLM something very factual, like what's the capital of France, it has millions and millions of examples in its training data that tell it the capital of France is Paris. So there's an extremely high chance it will come back and say Paris. Ask it again, it will say Paris; ask it again, Paris. But eventually it won't. One day it will say Canberra or Kuala Lumpur, because somewhere in its training data, which includes such reliable sources as Reddit, the online forum, which is literally the single most popular source of LLM training data by the way, someone will have made some horrendous blunder, and the model will decide that today is the day it's going to pick the 730th most likely response.
So one of the key reasons we're seeing this is that, because we're used to systems that behave deterministically, people just assume that's how it's going to be with LLMs, and they're dead wrong. They try to apply a probabilistic solution to something that often requires a deterministic answer, and if they do that they're going to be disappointed. There are many, many examples I came across in my research for the book that illustrate this. Just a trivial one: there are now a lot of legal cases where lawyers have used ChatGPT to generate court submissions quoting precedent cases. It turns out, of course, that LLMs hallucinate in roughly one in five answers; it varies by model and by how much training data they've got, but as a ballpark one in five is not far off. So typically, if they submit say 20 precedent cases, around four of them are going to be hallucinated, just made up. That has started to be noticed by judges. The last time I looked there were 230 legal cases in the US that had been referred for penalties for the lawyers who submitted these things to a judge. And just last night I was talking to a friend who is a judge in the UK, and he said there had been a UK case where the judge himself had submitted a very long judgment that turned out to contain hallucinated cases. So the propensity of large language models to basically make things up when they don't know the answer, because they're probability engines, is a problem. If you're in an area like the law, where people require some level of consistency and certainty, it says this is not a great area to be doing it. Or at the very least, you've got to have some sort of human in the loop. And I think that's the key recurring theme in many of these things: you cannot trust an LLM to produce the same answer time and again.
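Andy's point about probabilistic generation can be seen in a few lines of code. This is a toy sketch with an invented next-token distribution, nothing to do with any real model's numbers: most runs return the overwhelmingly likely token, but run it enough times and a low-probability answer eventually appears, which is exactly the behaviour that trips up people used to deterministic software.

```python
import random

# Toy next-token distribution for "The capital of France is ___".
# Real models assign probabilities over tens of thousands of tokens;
# these numbers are invented purely to illustrate sampling behaviour.
next_token_probs = {
    "Paris": 0.995,
    "Lyon": 0.003,
    "Canberra": 0.001,
    "Kuala Lumpur": 0.001,
}

def sample_next_token(probs):
    """Sample one token according to its probability, as an LLM decoder does."""
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

# Ask the "model" the same question many times.
answers = [sample_next_token(next_token_probs) for _ in range(10_000)]
print({tok: answers.count(tok) for tok in next_token_probs})
# Typically ~9,950 runs say "Paris", but a handful return something else:
# the same query does not guarantee the same answer.
```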

Kieran Gilmurray

You need humans checking them, which is rather going to disappoint a lot of people. And as you say, it's not just law; it's medical science, or, well, I'd rather my aircraft engineers weren't prompting their way through how to fix something. So is AI a square peg in a round hole, then, for many companies or projects?

Andy Hayler

I think one thing really worth saying, and it doesn't get said enough in the press and media, is that people bandy around the word AI as if there were one kind of AI. They're usually talking about ChatGPT, when in fact AI is a bunch of things. For example, machine learning has been around for quite a long time, doesn't hallucinate in the way that LLMs do, and is very good at spotting patterns in data. So it's often used for predictive maintenance, for example, on vehicles or in factories. In fact, some of the very successful AI use cases you read about, if you dig into them, it's quite curious how often they turn out to be machine learning examples, not LLM examples at all. So machine learning has certainly got its place and can be very useful in a whole bunch of cases. To take another type of AI, reinforcement learning: that's the technology used by DeepMind, the Google subsidiary, when they initially produced game-playing tools like AlphaZero for chess and AlphaGo for the game of Go. But much more usefully, there's something called AlphaFold, which predicts the way proteins fold in three-dimensional space. That sounds very obscure, but it's an enormously important area in drug discovery, where people are trying to predict which compounds might be candidates for a new drug. This particular type of learning, which has absolutely nothing to do with LLMs whatsoever, turns out to be extremely effective in certain cases, and AlphaFold is probably one of the most useful advances AI has delivered in recent years. So the first thing to say is that there are different kinds of AI; there are others too, but that gives you the point. The very first question is: what kind of AI are you talking about? Machine learning? An LLM? Reinforcement learning? Something else? That's the first angle to settle before you get into anything else.
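For contrast with the LLM examples, here is a small sketch of the "classic" machine learning Andy is describing for predictive maintenance: fit an anomaly detector on historical sensor readings from healthy equipment, then flag unusual new readings for inspection. The data is synthetic and the choice of scikit-learn's IsolationForest is just one reasonable option, not a recommendation from the conversation.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic history: vibration and temperature readings from a healthy machine.
healthy = np.column_stack([
    rng.normal(0.5, 0.05, 5_000),   # vibration (mm/s)
    rng.normal(70.0, 2.0, 5_000),   # bearing temperature (deg C)
])

# Unlike an LLM's sampled output, the fitted model scores the same input
# the same way every time.
model = IsolationForest(contamination=0.01, random_state=0).fit(healthy)

# New readings: three normal, one drifting towards failure.
new_readings = np.array([
    [0.52, 69.5],
    [0.48, 71.0],
    [0.51, 70.2],
    [0.95, 83.0],   # elevated vibration and temperature
])
flags = model.predict(new_readings)   # 1 = normal, -1 = anomalous
for reading, flag in zip(new_readings, flags):
    print(reading, "ANOMALY - schedule inspection" if flag == -1 else "ok")
```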

The One Mistake Before You Start

Kieran Gilmurray

So it could be the wrong AI that's the square peg in the round hole. We've talked about some of the ways AI goes wrong, and about understanding deterministic versus probabilistic models. What one move, or what one mistake, do you think puts most projects on a path to failure before they even start? What would you really call out?

Andy Hayler

You're narrowing me to one, and I'm sure there are a lot, but I really believe it's not realising that LLMs are probabilistic; that's the single thing that causes them to be applied to areas where they will fail. A couple of years ago I was guilty of this myself on a project where I was using an LLM for something I naively thought it would be good at, around textual analysis. Eventually, after more time than I would care to admit, using four different AIs, I hit a barrier and it dawned on me: hang on a minute, this is never going to work. And I was doing it myself. It's very easy to slip into that mode, because you're so used to computers giving you consistent answers. There are many problems in business where you think, I'd like to analyse a whole pile of data or a whole bunch of documents, and AI sounds great for that. But then you've got to ask the caveat: do you want the same answer every time? Do you want the same answer day after day? And are you confident the data is good enough that you'll get a very low rate of hallucination? Basically, AIs hallucinate, which is an anthropomorphic term not everybody likes, but it's so common I'm going to stick with it. Essentially they hallucinate when they haven't got complete enough training data. The way internal training works with LLMs, there are three stages, which I won't bore you with, but essentially they are rewarded for complete, confident answers. Rather than saying "I have no idea", which is what a human might say, the training typically doesn't go that way; they are heavily rewarded for providing conclusive answers. So if they don't have sufficient training data to find the answer, and they don't have enough supplementary data provided to them, they will come up with something very, very plausible. Just as an anecdote, the very first question I ever asked ChatGPT, and it wasn't because I was trying to trip it up, because I barely knew what it was at the time, was about a chess tournament I had the schedule for in front of me. I play some chess, and I happened to have been at this tournament about 30 years ago, so I had the results in front of me. I was curious whether an LLM would even know about such a thing, so I asked it to tell me about the tournament, who won it, and so on. It came out with an incredibly plausible answer: yes, it was held here, on these dates, here is the winner, here are all the results, here's the cross table. For a moment I was very impressed; I thought, wow, this is tremendous. Then I looked more closely. Hang on a minute, that isn't quite right. Some of the players are right, but some of them are not, and they weren't just invented names; they were all players from that era that it had somehow come up with. It had filled in the bits it didn't know with extremely plausible answers.
If you didn't have the exact answer in front of you, it would look like a profound, perfectly put together answer. It just happened to be wrong. What that taught me is that when you're asking a question you don't know the answer to, and you're using it for research, you've got to ask how sure you are that the answer coming back is correct. Because when you ask it about areas you do know a lot about, again, roughly one in five answers is made up. Very confidently correct and very confidently wrong; confidently wrong is the key message. So you've got to be aware of that when you're thinking about how to apply it to projects: think about review stages, and put in processes where human experts can check it. If you are that legal firm producing a submission for the judge and you decide to use an LLM to save time, and I'm sure when they do that they bill the clients much less than before, to reflect the massive time savings, then you do need to factor in significant time for somebody to go back and cross-check. In fact, just a few weeks ago Deloitte found this to their cost when they did a project for the government of Australia and produced a very lengthy, multi-hundred-page strategy document for 440,000 Australian dollars. So quite a lot of money. It turned out to be full of quotes, cases and examples that, when somebody who knew the area dug into them, simply didn't exist. But Deloitte had happily charged the full amount. Eventually they made a partial refund, because the thing made it into the national press. That's another example where what they should have done is say, fine, we're going to use an LLM to help write it, because these models write very fluently, maybe better than some of the junior consultants, I don't know. But they should then have taken some of those junior consultants and done a very detailed, line-by-line fact check, and stripped out any errors before it got anywhere near the client. They didn't. And as I say, because LLM answers tend to be very confident and very fluently written, it may not be Shakespeare, but it's certainly capable text free of typos and so on, it looks authoritative. It's very important to understand that some of it will not be correct; you need to check absolutely everything and put a process in place to make sure that happens. Yes, that will cost you more than if you could just press a button and have it generated, but there's nothing for free in this world.
As one of my favourite movies, The Princess Bride, put it: life is pain, and anybody who tells you different is selling something. In the case of consultants selling you $440,000 reports they produced in five minutes, there is no such thing as a free productivity gain. There may well be some productivity gain, but it comes with additional safeguards and checks that you need to build into the process.
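One hedged sketch of the human-in-the-loop check being described: before an LLM-drafted submission goes anywhere near a court or client, every citation is extracted and looked up in a trusted source, and anything unconfirmed is routed to a person. The regex and the `trusted_case_index` set are stand-ins for whatever extraction logic and authoritative database a firm would actually use, and "Harker v Pemberton" is a fabricated citation used purely to show the failure path.

```python
import re

def extract_citations(draft: str) -> list[str]:
    """Very rough pattern for 'Smith v Jones'-style case names; tune for your jurisdiction."""
    return re.findall(r"[A-Z][A-Za-z]+ v\.? [A-Z][A-Za-z]+", draft)

def verify_draft(draft: str, trusted_case_index: set[str]) -> dict:
    """Split citations into confirmed and unverified; unverified ones go to a human reviewer."""
    cited = extract_citations(draft)
    confirmed = [c for c in cited if c in trusted_case_index]
    unverified = [c for c in cited if c not in trusted_case_index]
    return {"confirmed": confirmed, "needs_human_review": unverified}

# Stand-in for an authoritative case database; "Harker v Pemberton" below is invented.
index = {"Donoghue v Stevenson"}
draft = "Following Donoghue v Stevenson and Harker v Pemberton, we submit that..."
print(verify_draft(draft, index))
# {'confirmed': ['Donoghue v Stevenson'], 'needs_human_review': ['Harker v Pemberton']}
```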

What The Successful 5% Do

Kieran Gilmurray

And this is one of the challenges I see in trying to get AI in: the standard operating procedure isn't there, or isn't so standard, the data isn't so good, and we haven't trained our people. And let's be honest, 99.99999999% of the population are not Shakespeare, so every little bit of advantage they can get is useful. So, Andy, what do the top five percent do differently?

Andy Hayler

Well, I think firstly they are very aware of what these things can and can't do. Secondly, they apply the right AI to the right problem. Someone who wants to look at, say, factory production lines, using sensor data to spot problems through predictive maintenance, understands enough to use machine learning for that, and isn't going to try to throw an LLM at it. So understanding what kinds of AI there are, and sometimes whether AI is appropriate at all for the problem, which is also entirely possible, is one of the first things they get right. The second thing is that they put a lot of effort into reviewing their data quality, so they'll typically have strong data governance and extensive use of data quality technologies, some of which use AI themselves to generate data quality rules and business rules, for example. If you're using the right tool on a very good set of data for an appropriate task, then hopefully it's going to work quite well. Unfortunately, at the moment that's showing up in about 5% of cases. There was a report just yesterday in the Wall Street Journal quoting the chief economist of Goldman Sachs, who was asked what the productivity effect of AI on the US economy has been. And he said, as far as we can tell, it's zero. "Just a mere zero", I think, was the precise phrase.

Kieran Gilmurray

The productivity of the average worker is zero, and I say that gently. And I wonder, because we look at AI and say it's 95% correct. I made a comment the other week to someone who was quoting me the same figure. I said, well, if you can find me an employee who's 95% correct, give them to me and I'll train them. If they're going to get it right 95% of the time, they're much better than most of the population.

Andy Hayler

Yeah, well, 95% would be good, but what we're saying is it's 5%, not 95%.

Kieran Gilmurray

Yeah, whoops. I'd want slightly higher than 5% if I'm paying someone's wage. But a lot of people to date think it's the model that makes for success: pick OpenAI, pick Claude, pick whichever, and everything is focused on that conversation. In reality, from what you're describing, how important is data quality, how important are processes, how important are our people?

Andy Hayler

Well, data quality, as I say, comes up as the number one quoted reason for AI project failure among practitioners, and that's been in study after study. It was mentioned in the MIT study, but as I said there have been at least four or five others, and data quality always seems to come top. So having robust data quality processes in place matters, and what we mean by that is strong data governance and data ownership, with business people actually owning the quality of the data rather than deferring it to the IT department, plus processes that measure data quality and technology to help with that. The data quality industry itself has actually taken advantage of AI, not necessarily LLMs but mostly machine learning, in order to scale up. In the past, if you go back five years, people would typically invent data quality rules by hand and put them individually into some sort of catalogue or data quality tool. They'd say: this field has to be within a certain range, this one has to be an integer, this one has to be delivered within 30 days, whatever the rule is. That was all done by hand. What a lot of tools can do these days is generate at least suggestions for these rules by actually observing the data. Machine learning algorithms will observe months or years of data, spot patterns in it, and therefore detect which things are anomalous. You've got to be careful here: you might take a typical order value and say anything very low or very high should raise an alert as a candidate error, but if you're a seasonal business, the ice cream business, say, or the pumpkin business in October, there will obviously be seasonal spikes. Machine learning algorithms are clever enough to spot those patterns as normal, and they won't issue false-positive alerts, at least nowhere near as many as used to happen when humans wrote the rules. So you can use these modern data quality tools, and there are a bunch of them now, to implement data quality at a scale that was quite expensive and difficult before. I also think a much higher degree of AI literacy, through training programmes, is really important. As I mentioned, most people are not that familiar with the idiosyncrasies of LLMs. I met someone recently, whom I won't name, who was a consultant at a well-known management consultancy, allegedly working in AI, and who didn't know what a hallucination was. I was absolutely gobsmacked. I had to explain it a couple of times; I thought I must have been saying the wrong word. They had no idea.
And this is someone advising companies on AI strategy, never mind people in regular jobs whose job is not to understand the internals of LLMs. But if you put at least some level of training in place, you can start to help people spot the pros and cons and the idiosyncrasies, and make fewer errors when they're using LLMs and other AI techniques. Just rolling out that kind of literacy programme would, in itself, save an enormous amount of trouble, and probably significantly reduce the really quite ridiculously high failure rate we're seeing in the industry at the moment.
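A minimal sketch of the rule-suggestion idea Andy describes, assuming nothing about any vendor's tooling: profile historical values month by month and propose a plausible range for each month, so a July spike in a seasonal business is treated as normal while the same number in January raises an alert.

```python
import statistics
from collections import defaultdict

# Synthetic history of daily order totals, keyed by month: a summer-peaking business.
history = [(m, v) for m, vals in {
    1: [200, 210, 190], 7: [900, 950, 870], 12: [180, 220, 205],
}.items() for v in vals]

def suggest_rules(history, tolerance=3.0):
    """Profile past data and suggest a per-month plausible range (mean +/- tolerance*stdev).
    Real data quality tools do something richer, but the principle is the same:
    the rules come from observing the data, not from someone typing them in."""
    by_month = defaultdict(list)
    for month, value in history:
        by_month[month].append(value)
    rules = {}
    for month, values in by_month.items():
        mu, sd = statistics.mean(values), statistics.pstdev(values)
        rules[month] = (mu - tolerance * sd, mu + tolerance * sd)
    return rules

def check(month, value, rules):
    low, high = rules[month]
    return "ok" if low <= value <= high else f"ALERT: {value} outside [{low:.0f}, {high:.0f}]"

rules = suggest_rules(history)
print(check(7, 930, rules))   # a July spike is normal for July -> ok
print(check(1, 930, rules))   # the same number in January is flagged
```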

ROI First And Start Small

Kieran Gilmurray

Yeah, I was slightly disappointed when I heard the EU AI Act had watered down the requirement for AI education. I could be wrong; I hope to goodness I hallucinated when I read that, but I might not have. So what steps would you recommend companies take to make AI successful, Andy? And I suppose, define what you mean by success as well.

Andy Hayler

Well, for a start, I grew up in two very large companies, Exxon and Shell, which had very robust project evaluation processes. At Exxon, absolutely everything over $50,000 had to go through a cost-benefit analysis that produced a net present value, an IRR, and a payback period. That's how I was trained: any IT investment, or any investment at all, an oil rig or whatever, had to have a robust return on investment analysis. AI investment should be treated just the same, like any other investment. So the first thing is to step back and ask: what's the return on investment case? If you were going to apply AI in various areas, you've probably got a whole bunch of candidate projects. Some are going to be easier than others, and some are going to have a higher return than others, even though they may be more difficult. Putting a robust return on investment methodology in place will help you prioritise which projects are going to give you the best bang for the buck. It shouldn't really have to be said, but having later set up a software vendor and worked with other companies, I realised almost nobody did this. I was absolutely flabbergasted when I started working with companies that weren't Exxon and found this was not the norm everywhere; explaining ROI and IRR and NPV was like speaking a different language. So that's one thing: put a robust investment case forward. The next thing is to put in place good AI literacy programmes, as we've discussed. This is very important, because people are going to come across all kinds of issues, and the more education they've got about at least the basics of how these technologies work, the better chance they've got of navigating them. And, again listening to these large studies that have been done on AI projects, really re-examine the whole area of data quality and data governance, to make sure you've got a solid data governance programme. It's quite disturbing in a way. I've been working in data quality and master data management for many years, and every single year there's a survey by one or other of the big firms, whether it's Deloitte or KPMG or Accenture, asking a couple of thousand executives how much they trust the quality of data in their organisation. To be honest, the figure has never budged. It's always around 30 to 35 percent of executives who trust their organisation's data. It's never 20, it's never 40; it's always in the range of 30 to 35, and it's been like that for a decade.
That suggests we're not doing this well, because if AI projects depend on high quality data and yet only a third of executives trust their own data, then if you're going to put effort into fixing something, data quality might be a really good place to start, because it will have an impact. And it will have an impact outside the AI projects as well: better quality data helps you avoid a whole bunch of problems and can usually be fairly easily cost-justified. It's just been one of those Cinderella subjects that's never been given a high level of attention. Nobody ever wanted to be promoted to data quality manager; it was never the trendiest or most glamorous of areas. But in the world of AI, where models are utterly dependent on the data they see, data quality should suddenly be much higher on the agenda and much more of a priority.
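As a worked illustration of the evaluation discipline Andy describes, here is a short sketch that ranks invented candidate AI projects by net present value at a hurdle rate and sanity-checks the simple payback period. All figures are made up; IRR would need a root-finder on the same cash flows.

```python
def npv(rate, cash_flows):
    """Net present value, where cash_flows[0] is the upfront cost (negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def payback_years(cash_flows):
    """Years until cumulative cash flow turns positive (None if it never does)."""
    total = 0.0
    for year, cf in enumerate(cash_flows):
        total += cf
        if total >= 0:
            return year
    return None

# Invented candidate projects: [year-0 cost, year-1..n benefits].
candidates = {
    "Invoice-matching automation": [-120_000, 90_000, 90_000, 90_000],
    "Chatbot for all customer service": [-500_000, 80_000, 120_000, 150_000],
    "Predictive maintenance pilot": [-60_000, 55_000, 55_000],
}

hurdle_rate = 0.10
for name, flows in sorted(candidates.items(), key=lambda kv: npv(hurdle_rate, kv[1]), reverse=True):
    print(f"{name:35s} NPV={npv(hurdle_rate, flows):>10,.0f}  payback={payback_years(flows)} yrs")
# Ranking by NPV (and sanity-checking payback) makes "start small, high value" concrete:
# the big glamorous project comes out bottom, the narrow ones come out top.
```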

Kieran Gilmurray

Yeah, I haven't seen too many executives standing in front of the mirror going "mirror, mirror on the wall, who's got the dirtiest data of them all", but it matters. What final advice would you give to companies who are thinking about AI, or failing to scale it, who really want and definitely need to get AI working, but just aren't quite at the races yet?

Andy Hayler

I think there's some general advice you'd probably give for any new technology. You can think about setting up some kind of internal centre of excellence, where you take a group of people dedicated to the job, give them a very high level of training, and let them test things out, evaluate tools and use them in anger, so that they can then advise other projects on a support basis. That would be true for all kinds of other projects too, by the way, but it's generally true. And again, very generic advice that has always served me well in my career: try not to boil the ocean, as the saying goes. People often have incredibly grandiose plans for whatever the latest technology is, and that's fine, but you need to think big and start small. Pick off the low-hanging-fruit projects where you are going to get some success. That comes back to the return on investment methodology: if you can analyse the likely financial impact of a project, you find there are some things that have a high financial impact and actually aren't that hard, other things that have a low financial impact and are really hard, and others in the middle. So it makes sense to begin with things that have some impact fairly quickly, learn on those projects, and deliver success, something practical. Then people say, oh wow, that project over there did really well, maybe we'll have some of that. In general, having worked for two huge organisations plus my own software company, I'd say that in very large organisations people are typically somewhat reluctant to change; they don't necessarily like changing their existing processes and are often quite resistant. But if they see colleagues in another department or another line of business getting a success with something, they will willingly follow rather than resist. So this approach of doing small, manageable projects that can deliver a quick, if not necessarily huge, return, making successes of them, and publicising and communicating those successes to colleagues, will start to build momentum behind it. As I say, this advice would be true of almost any new technology; it's not really specific to AI, but it's old, well-worn advice that still holds.

Book Plug And Final Checklist

Kieran Gilmurray

Yeah, the basics never change, do they, despite the tech. I'd plus-one everything you've said there, with one small addition: make sure the project is actually pointed at the North Star, i.e. the business goals. It should be helping fulfil the business vision, not just AI theatre, ticking a box to say you've done AI. All those small things build up into the big things; they compound. And not only that, as you say, Andy, it builds confidence, it builds capability, it gets you the win, it frees up some time, it gets you some cash, it keeps people confident, and then you can go on to the next and the next and the next. Andy, very quickly before we close: you've a book out, and we're talking about education. Show us the book and tell me a little bit about it. It's an excellent book, by the way; I've read it cover to cover.

Andy Hayler

Yes. What I'm trying to do in this book is help educate people, not about the really technical internals like gradient descent, but about how LLMs work at a practical level, coming back to this thing about probabilistic versus deterministic and nailing that down, and then about how AI can be applied in different industries. So it covers the different types of AI, how they work, with real examples from industry after industry. The book also covers some more general issues around AI legislation - you mentioned the EU AI Act, for example, though there are others around the world - and general issues around AI ethics, and then a little about the future, or possible futures, of AI. It's intended for anybody who really wants to understand AI better, either at work or in their personal life, and who wants to be more successful with it. I think there's something in the book for most people; it's written for a general audience rather than a highly technical one, and I hope it's useful to the people who buy it. You can order it on Amazon, and it's on some other platforms as well, Barnes & Noble and other online platforms in America, for example, but the obvious thing is to order it on Amazon. It's called Beyond the Hype: A Realist's Guide to AI, by Andy Hayler, just published in 2026. Yeah, and it delivers.

Kieran Gilmurray

It really does; I loved it. I read it, then handed it to my son, who's currently reading it, and he said it's the first book he's seen that gives him proper examples. So there's the testament. Look, folks, here's the bottom line: AI doesn't fail because the models are weak, it fails because the foundations are weak. Unclear ownership of data, vague success metrics, tech-first thinking, grand projects with no defined return. The 5% do the opposite. They choose narrow, high-value use cases, they match the right AI technique to the right problem, they define what success looks like in commercial terms, they assign real business owners, and they measure relentlessly. So before your next AI initiative begins, just pause for a second. Audit your use cases, be honest about your data quality, define the ROI case, align the stakeholders, start small, review constantly. AI is powerful, but only when it's applied with discipline. Andy, thank you so much for the insight and the clarity today. And thank you to everyone for listening. If you want to be in the 5%, focus less on the model and more on the operating model. We'll see you next time.

Andy Hayler

Thank you very much indeed for having me, Kieran.