The Digital Transformation Playbook
Kieran Gilmurray is a globally recognised authority on Artificial Intelligence, intelligent automation, data analytics, agentic AI, leadership development and digital transformation.
He has authored four influential books and hundreds of articles that have shaped industry perspectives on digital transformation, data analytics, intelligent automation, agentic AI, leadership and artificial intelligence.
𝗪𝗵𝗮𝘁 does Kieran do❓
When Kieran is not chairing international conferences, serving as a fractional CTO or Chief AI Officer, he is delivering AI, leadership, and strategy masterclasses to governments and industry leaders.
His team helps global businesses drive AI, agentic AI, digital transformation, leadership and innovation programs that deliver tangible business results.
🏆 𝐀𝐰𝐚𝐫𝐝𝐬:
🔹Top 25 Thought Leader Generative AI 2025
🔹Top 25 Thought Leader Companies on Generative AI 2025
🔹Top 50 Global Thought Leaders and Influencers on Agentic AI 2025
🔹Top 100 Thought Leader Agentic AI 2025
🔹Top 100 Thought Leader Legal AI 2025
🔹Team of the Year at the UK IT Industry Awards
🔹Top 50 Global Thought Leaders and Influencers on Generative AI 2024
🔹Top 50 Global Thought Leaders and Influencers on Manufacturing 2024
🔹Best LinkedIn Influencers Artificial Intelligence and Marketing 2024
🔹Seven-time LinkedIn Top Voice
🔹Top 14 People to Follow in Data 2023
🔹World's Top 200 Business and Technology Innovators
🔹Top 50 Intelligent Automation Influencers
🔹Top 50 Brand Ambassadors
🔹Global Intelligent Automation Award Winner
🔹Top 20 Data Pros You Need to Follow
𝗖𝗼𝗻𝘁𝗮𝗰𝘁 Kieran's team to get business results, not excuses.
☎️ https://calendly.com/kierangilmurray/30min
✉️ kieran@gilmurray.co.uk
🌍 www.KieranGilmurray.com
📘 Kieran Gilmurray | LinkedIn
Adoption Is Not Absorption
Boards are seeing more AI activity than AI-driven transformation. This episode examines why usage metrics can create confidence while operating models remain largely unchanged.
It explores the gap between AI adoption and true organisational absorption.
TLDR / At a Glance
• Adoption versus absorption
• Usage metrics and false confidence
• Task productivity versus system productivity
• Shadow AI and governance risk
• Workflow redesign as value driver
• Board-level absorption dashboards
The core takeaway is that AI value appears when workflows, controls, decisions, accountability, and outcomes change in measurable and governed ways.
𝗖𝗼𝗻𝘁𝗮𝗰𝘁 my team and me to get business results, not excuses.
☎️ https://calendly.com/kierangilmurray/results-not-excuses
✉️ kieran@gilmurray.co.uk
🌍 www.KieranGilmurray.com
📘 Kieran Gilmurray | LinkedIn
🦉 X / Twitter: https://twitter.com/KieranGilmurray
📽 YouTube: https://www.youtube.com/@KieranGilmurray
📕 Want to learn more about agentic AI? Then read my new book, Agentic AI and the Future of Work: https://tinyurl.com/MyBooksOnAmazonUK
High Adoption, Low Absorption
SPEAKER_00: High adoption, low absorption: why AI usage metrics mislead the board. Boards are being shown more evidence of AI activity than evidence of AI transformation. Licenses are being activated, pilots are being launched, employees are experimenting, and dashboards are filling up with usage data. Yet in many organizations the operating model has barely changed. This article explores why AI adoption is not the same as AI absorption. The argument is simple. Adoption shows that people can access and use AI. Absorption shows that AI has changed how work is designed, governed, measured, and converted into value.

The adoption story looks better than the operating reality. The headline numbers make AI progress look impressive. Research shows that a large majority of organizations are now using AI in at least one business function, with generative AI usage also widespread across enterprises. On the surface, this suggests AI has moved from experimentation into mainstream business use, but the value picture is far less convincing. The same research also shows that only a minority of organizations are seeing measurable enterprise-level financial impact, while relatively few have fundamentally redesigned workflows around AI. That is the real issue. AI is being adopted faster than it is being absorbed. This matters because boards can easily mistake movement for maturity. A growing number of users, pilots, prompts, and training completions can create the appearance of progress. But if workflows, controls, decision rights, and value measures remain unchanged, the organization may simply be busy with AI rather than becoming meaningfully better because of it.

Adoption is not absorption. AI adoption is evidence that the technology has entered the organization. It includes licenses, access, prompt volume, pilots, training sessions, and experimentation. These indicators are useful, particularly in early stages, because they show whether people have started using AI.
AI absorption is different. It appears when AI has been integrated into the operating model itself. Absorption becomes visible when workflows are redesigned, decision latency improves, rework decreases, controls become embedded, and accountability for AI-assisted work becomes clear. Adoption changes behavior at the edge of the organization. Absorption changes the operating model. This distinction matters because individual productivity does not automatically become enterprise value. An employee may complete a draft more quickly, but if the approval process remains unchanged, the customer journey may not improve. A team may generate more analysis, but if decision rights remain unclear, decisions may still be slow or poor. AI absorption occurs only when the surrounding system changes.

Why usage dashboards create false confidence. Many AI dashboards are built around the easiest metrics to measure. These include active users, licenses, prompts, training completion, and pilot counts. These metrics are not useless, but they are incomplete. They show exposure to AI rather than conversion into business value. The danger is that boards receive a picture of AI progress that appears precise but is strategically shallow. A dashboard showing rising prompt volume may look positive, even while approval bottlenecks, rework, and customer response times remain unchanged. High usage may include casual experimentation, duplicated effort, low-value prompting, or employees using AI outside approved processes. It may reveal very little about whether AI has improved cycle time, quality, cost to serve, risk posture, or customer outcomes. This is why the board question must change. Instead of asking how many people are using AI, leaders should ask where AI has changed the way work gets done and what measurable outcome improved as a result.

Why local productivity does not always become enterprise value.
One of the biggest traps in AI reporting is the assumption that local time savings automatically become enterprise value. They often do not. Time saved at task level can be absorbed by review effort, rework, coordination, additional output demands, or bottlenecks elsewhere in the process. This is the difference between task productivity and system productivity. AI may help an employee write faster, summarize faster, or analyze faster, but if the workflow still depends on slow approvals, fragmented data, unclear ownership, or manual exception handling, the end-to-end process may barely improve. This is why workflow redesign matters so much. Research consistently shows that workflow redesign is one of the strongest factors associated with measurable business impact from generative AI. The value is not created simply because people use AI; it is created when work is rebuilt around what AI changes.

The hidden risk of high adoption. High adoption can also create hidden risk. When employees use AI without approved tools, clear policies, or proper controls, organizations may gain speed while losing visibility. Shadow AI is already widespread in many environments. Employees often use public tools outside official systems when formal workflows are too slow or restrictive. This creates governance problems. Sensitive data may be entered into unapproved systems. AI-generated outputs may be used in customer or client work without review. Decisions may be influenced by tools that are not monitored or governed. The issue is not that employees are wrong to experiment. Often they are responding to real workflow friction. The leadership failure occurs when governance, systems, and guidance lag behind actual behavior.

What absorption looks like in practice. A genuinely absorbed AI use case looks very different from a disconnected pilot. It has a defined workflow, a business owner, a measurable baseline, a clear outcome, a control model, and a path to scale.
Absorption becomes visible when workflows are redesigned, approvals change, review burden falls, cycle time improves, ownership becomes clearer, and controls become embedded in how work actually operates. In customer service, this may mean AI supports triage, drafting, escalation routing, and quality review inside one governed workflow. In finance, it may mean AI supports forecasting or controls testing with clear data lineage and human approval points. In human resources, it may mean AI assists workforce planning or employee support, while fairness, privacy, and escalation rules are built directly into the process. Absorption is not simply AI being used. It is AI changing the workflow in a measurable and governed way.

What boards should measure instead. Boards should still monitor adoption, but adoption should sit inside a wider absorption dashboard. The objective is to understand whether AI is changing the business rather than simply whether people are touching the tools. A stronger board dashboard should include adoption depth, workflow absorption, cycle time, quality and rework, decision latency, realized value, risk coverage, shadow AI exposure, workforce impact, and scaling rate. This changes the conversation from activity to conversion. It gives boards a more accurate view of where AI is working, where it is stalled, and where risk is increasing.

The leadership shift now required. The next phase of AI leadership is not about proving that employees can use AI. That phase is already underway. The harder challenge is proving that AI is being absorbed into the organization in ways that improve performance. This requires leaders to focus resources on fewer and better-defined workflows. It requires accountable owners rather than isolated experimentation. It requires governance embedded directly into workflows rather than written separately after deployment. And it requires boards to challenge dashboards that show activity without showing value.
The organizations that succeed will not be the ones with the most pilots or the loudest AI narrative. They will be the ones able to demonstrate with evidence that AI has changed how work gets done, and that those changes are producing measurable, governed, and sustainable value.

Conclusion. AI adoption is real, but AI absorption remains limited. Boards should be careful not to mistake licenses, pilots, prompts, and training completions for transformation. The real test is whether AI has changed workflows, decisions, controls, accountability, and outcomes. The board's responsibility is therefore to ask better questions: not simply who is using AI, but where AI has been absorbed into the operating model and what value can be proven as a result. The organizations that outperform in the AI era will not be those with the most visible adoption. They will be the ones that redesign their operating model deeply enough for AI to become part of how the business actually runs. This concludes the article. You can also read this article on my LinkedIn page, where I share regular insights on AI, strategy, and emerging technologies.