The Digital Transformation Playbook

From Org Chart to Work Chart: Where AI Value Really Comes From

Kieran Gilmurray


Many organisations equate AI activity with AI fluency, but frequent tool use does not mean work has truly changed. This episode examines why visible experimentation often masks shallow capability, inconsistent execution, and limited measurable value.

It explores how leaders can move AI from scattered usage into structured, repeatable workflows.

TLDR / At a Glance

• Usage versus true fluency
• Overstated enterprise adoption
• Fragmented experimentation patterns
• Workflow integration and standards
• AI maturity stages
• Behaviour, judgement, and verification
Real AI fluency emerges when AI becomes part of the operating model, improving quality, cycle time, decision speed, and execution at scale.



Contact my team and me to get business results, not excuses.

☎️ https://calendly.com/kierangilmurray/results-not-excuses
✉️ kieran@gilmurray.co.uk
🌍 www.KieranGilmurray.com
📘 Kieran Gilmurray | LinkedIn
🦉 X / Twitter: https://twitter.com/KieranGilmurray
📽 YouTube: https://www.youtube.com/@KieranGilmurray

📕 Want to learn more about agentic AI? Then read my new book on Agentic AI and the Future of Work: https://tinyurl.com/MyBooksOnAmazonUK


What AI Fluency Really Means

SPEAKER_00

AI fluency is not what most organizations think it is. This article explores a growing misunderstanding in enterprise AI. Many organizations believe they are becoming AI fluent because people are using tools more often. In reality, most are still in early experimentation, where usage is visible but capability is shallow and value is inconsistent. The real question is not whether people are using AI; it is whether AI is changing how work gets done. This article clarifies what real AI fluency looks like, why most organizations are overstating adoption, and how leaders can move from scattered usage to embedded, repeatable execution across teams.

Why most organizations overstate AI adoption

AI usage is rising quickly across organizations. People are experimenting with tools, drafting content, summarizing documents, and asking questions. On the surface, this creates a convincing narrative that AI is being adopted, but that narrative does not hold up under scrutiny. In many organizations, usage is concentrated among a small group of engaged users, while the majority of employees interact with AI sporadically or not at all. Even where usage is broader, it often lacks consistency, structure, and integration into core workflows.

This is where adoption is overstated. Real adoption is not defined by access or occasional use. It is defined by widespread day-to-day application across most knowledge workers, with visible impact on how work is executed. By that definition, most organizations are still operating well below true adoption. The result is a distorted view of progress. Leaders see activity and assume capability when, in reality, the organization has not yet changed how it works.

The difference between usage and fluency

The distinction that matters is whether AI is improving outcomes in a way that can be repeated, scaled, and relied upon. Usage alone does not achieve that. A more useful framing is to link AI fluency to workflow integration, measurable outcomes, and repeatable systems. This shifts the conversation away from people using tools toward people working differently because of those tools.

This is the real test. If outputs are inconsistent, methods are not shared, and teams cannot demonstrate improvement beyond isolated examples, then the organization is still experimenting. It may be active, but it is not yet capable. Fluency begins when AI is no longer an optional tool and becomes part of how work is structured and delivered.

Where most organizations are stuck

The typical pattern is easy to recognize. A small group explores AI deeply and finds meaningful applications. Others use it occasionally for low-risk tasks. Some avoid it entirely. Across the organization, there is no consistent way of working with AI.

This inconsistency creates friction. Outputs vary in quality, decisions require additional validation, and teams cannot rely on shared practices. In some cases, AI improves productivity. In others, it increases rework or introduces risk. The net effect is uneven performance.

This is why many organizations plateau. AI is present but not operational. Without standardization, workflow integration, and shared expectations, usage remains fragmented and difficult to scale. More importantly, this stage creates a false sense of progress. Leaders may feel the organization is advancing, but underlying capability has not materially changed.

What real AI fluency looks like

The real shift happens when AI becomes part of execution rather than an optional add-on. At this point, AI is embedded in how tasks are performed, not just how quickly they can be completed. AI begins to support core activities such as analysis, drafting, decision preparation, and synthesis in a structured way. Outputs are not simply generated; they are shaped, checked, and improved through consistent processes. Teams develop confidence not because the model is perfect, but because the system around it is reliable.

This is where the nature of work starts to change. Tasks are redesigned, handoffs are reduced, and decision cycles become more efficient. Instead of accelerating existing work, AI enables better ways of working to emerge. The difference is visible. Teams with real fluency produce more consistent output, require less rework, and make decisions faster with greater clarity. AI stops being something people occasionally use and becomes part of the operating model.

A simple maturity model for AI fluency

AI maturity can be understood as a progression through several stages:

• At the earliest stage, AI is used opportunistically. People experiment for quick wins, but outputs are inconsistent and depend on individual curiosity.
• At the next stage, individuals begin to use AI to improve their own productivity. Gains appear, but they are uneven and not shared across teams.
• In the third stage, AI becomes integrated into defined workflows with clear inputs, review steps, and expectations. Outputs become more consistent and value becomes measurable.
• At the fourth stage, teams adopt shared methods, templates, and verification practices. AI use becomes predictable and scalable, reducing variation in quality.
• At the final stage, work itself is redesigned around AI capability. Tasks are redistributed, decision making improves, and better ways of working are scaled across the organization.

Leaders should use this model to diagnose reality rather than aspiration. The key challenge is not moving from no use to some use. It is moving from individual productivity gains to shared workflows, team standards, and redesigned execution.

Why behaviour matters more than tools

Tools enable capability, but they do not create it. What separates one stage from the next is how people work with AI. This includes how clearly they define objectives, how they structure inputs, how they refine outputs, and how rigorously they verify results. Fluent users do not simply use AI more often. They apply judgment more consistently. They know when to trust the system, when to challenge it, and when to override it. This is why AI fluency scales through habits and working methods rather than access to tools alone. It must be designed, reinforced, and embedded into the organization.

The shift leaders need to make

For leaders, the more useful question is no longer how to increase AI usage. It is how to redesign work so that AI improves execution in a controlled and measurable way. This shifts the conversation from tools to execution. Leaders need to define where AI fits in workflows, what quality standards apply, and where human judgment remains essential. Measurement must follow the same logic. Instead of focusing on usage, leaders should measure quality, rework, cycle time, and decision speed. These indicators reveal whether AI is improving how work is done.

AI fluency as an operating standard

In knowledge work, particularly where tasks are digital, repeatable, and decision-driven, AI fluency is becoming a baseline expectation. Real adoption means most employees are using AI in their daily work in structured, governed, and outcome-focused ways. It is visible in how work is completed, not just in how often tools are used. Organizations that treat AI as a side initiative will continue to see fragmented capability and inconsistent results. Those that treat it as an operating standard will build systems that scale execution, reduce variation, and create more durable advantage over time.

Conclusion

Most organizations are not failing to adopt AI. They are failing to embed it into how work actually gets done. The real divide is not between users and non-users. It is between organizations where AI remains occasional and fragmented, and those where it is embedded, repeatable, and used consistently across the workforce. That shift marks the point where AI stops being an experiment and becomes a performance driver.

This concludes the article. You can also read this article on my LinkedIn page, where I share regular insights on AI, strategy, and emerging technologies.