The Digital Transformation Playbook
Kieran Gilmurray is a globally recognised authority on Artificial Intelligence, intelligent automation, data analytics, agentic AI, leadership development and digital transformation.
He has authored four influential books and hundreds of articles that have shaped industry perspectives on digital transformation, data analytics, intelligent automation, agentic AI, leadership and artificial intelligence.
𝗪𝗵𝗮𝘁 does Kieran do❓
When Kieran is not chairing international conferences, serving as a fractional CTO or Chief AI Officer, he is delivering AI, leadership, and strategy masterclasses to governments and industry leaders.
His team helps global businesses drive AI, agentic AI, digital transformation, leadership and innovation programs that deliver tangible business results.
🏆 𝐀𝐰𝐚𝐫𝐝𝐬:
🔹Top 25 Thought Leader Generative AI 2025
🔹Top 25 Thought Leader Companies on Generative AI 2025
🔹Top 50 Global Thought Leaders and Influencers on Agentic AI 2025
🔹Top 100 Thought Leader Agentic AI 2025
🔹Top 100 Thought Leader Legal AI 2025
🔹Team of the Year at the UK IT Industry Awards
🔹Top 50 Global Thought Leaders and Influencers on Generative AI 2024
🔹Top 50 Global Thought Leaders and Influencers on Manufacturing 2024
🔹Best LinkedIn Influencers Artificial Intelligence and Marketing 2024
🔹Seven-time LinkedIn Top Voice
🔹Top 14 People to Follow in Data 2023
🔹World's Top 200 Business and Technology Innovators
🔹Top 50 Intelligent Automation Influencers
🔹Top 50 Brand Ambassadors
🔹Global Intelligent Automation Award Winner
🔹Top 20 Data Pros you NEED to Follow
𝗖𝗼𝗻𝘁𝗮𝗰𝘁 Kieran's team to get business results, not excuses.
☎️ https://calendly.com/kierangilmurray/30min
✉️ kieran@gilmurray.co.uk
🌍 www.KieranGilmurray.com
📘 Kieran Gilmurray | LinkedIn
Workslop: The Hidden Tax of Generative AI at Work
Generative AI is accelerating output across the workplace, but not all of it is useful. This episode examines workslop, the hidden cost of polished content that slows real decision making.
It explores how AI driven output shifts effort from creation to verification and impacts organisational performance.
TLDR / At a Glance
• Definition of workslop in AI workflows
• Incentives driving output over judgement
• Time loss and decision delay costs
• Trust erosion across teams
• High risk workflows and examples
• Governance, metrics, and 30 day action plan
The key takeaway is that AI value depends on operating design, not output volume.
𝗖𝗼𝗻𝘁𝗮𝗰𝘁 my team and me to get business results, not excuses.
☎️ https://calendly.com/kierangilmurray/results-not-excuses
✉️ kieran@gilmurray.co.uk
🌍 www.KieranGilmurray.com
📘 Kieran Gilmurray | LinkedIn
🦉 X / Twitter: https://twitter.com/KieranGilmurray
📽 YouTube: https://www.youtube.com/@KieranGilmurray
📕 Want to learn more about agentic AI? Then read my new book, Agentic AI and the Future of Work: https://tinyurl.com/MyBooksOnAmazonUK
What Workslop Means
Workslop: The Hidden Tax of Generative AI at Work. This article explores a growing failure mode in workplace AI. It focuses on output that looks polished but is not ready for decision making. Workslop occurs when AI makes it easy to produce drafts, notes, and summaries, while the real effort is pushed to others who must verify and correct them. After reading this article, you will understand why workslop is increasing, which workflows are most exposed, how to measure its cost, and what leaders should do in the next 30 days to reduce it without slowing useful adoption.

Why workslop is rising now
Recent changes have shifted where AI operates. AI is now embedded directly into documents, spreadsheets, presentations, and meeting systems. Output is not only generated more easily; it is increasingly inserted directly into workflows. At the same time, organizations are under pressure to demonstrate AI usage. This creates incentives that favor output volume rather than judgment. When performance is linked to visible AI usage, the risk increases that people generate more content than is actually useful. The result is a structural problem: it is not about individual behavior, it is about how incentives, tools, and workflows interact.

What workslop actually costs
The most visible cost is time. In many cases, the person creating the output saves minutes, while the person receiving it loses significantly more time verifying and correcting it. The more important cost is decision delay. A summary without context, a presentation filled with filler content, or meeting notes without clear decisions may appear complete; the failure happens later, when someone has to clarify, correct, or redo the work. This is why workslop often hides inside teams that appear productive: output increases, but real throughput does not. There is also a trust cost. When people begin to doubt the reliability of shared outputs, every document requires additional checking.
Review time expands across the organization, reducing overall efficiency.

Which workflows are most vulnerable?
The most exposed workflows share a common pattern: they allow AI to produce something that looks complete before anyone verifies whether it is useful. This includes project updates, strategy presentations, status reports, meeting notes, executive summaries, internal briefings, and early-stage client communication. Meeting notes are a clear example. When notes are generated automatically at scale, they can quickly become polished but low-value records unless someone adds real decisions, ownership, and next steps. Security and triage workflows highlight a similar issue: when automated outputs increase faster than review capacity, the bottleneck shifts to verification. This creates overload rather than efficiency.

Where AI genuinely helps
Workslop is not an argument against generative AI; it is an argument against unmanaged use. The strongest gains appear in workflows where outputs are easy to verify, humans remain accountable, and the system is positioned as support rather than replacement. This is why organizations are focusing more on evaluation in real workflows rather than controlled demonstrations. The question is not whether AI works in theory; it is how it behaves under real conditions where decisions matter.

Why this is a governance problem
Training alone is not enough; configuration matters. When systems are set to generate and insert content automatically, leaders cannot rely solely on individual judgment. Governance must be built into how tools are deployed. This includes decisions about defaults, permissions, and where AI outputs can be used. As AI becomes more embedded in work, governance becomes part of operating design rather than a separate compliance activity.

A 30-day leader plan
The first step is to define what decision-ready output means for key workflows.
A document should not be considered complete unless it includes context, clear ownership, next steps, and a level of verification that allows action without guesswork. The second step is to configure systems deliberately: decide where automatic generation should be enabled, where it should be limited, and who has permission to use advanced features. The third step is to introduce a lightweight quality check before content is shared. Every output should make clear what is verified, what requires further validation, what decisions are being requested, and who is responsible. The final step is to change metrics: instead of tracking volume, track rework, clarification cycles, time to decision, and error rates. These measures reveal the real cost of workslop.

What leaders should measure
Time saved alone is not a reliable measure of value. A faster draft that creates additional follow-up work is not productivity. More meaningful measures include rework time, decision speed, clarification volume, error rates, and usage within approved systems. For higher-impact workflows, leaders should also track whether outputs are supported by evidence, whether they have been reviewed by a responsible owner, and whether there is a clear record of how they were produced. These measures provide a more accurate view of performance.

Conclusion
Workslop is not primarily a problem of model capability; it is a problem of operating design. When AI makes it easy to generate plausible outputs, organizations that lack standards, ownership, and review processes will see volume increase faster than quality. The response is not to restrict AI use. It is to build a simple quality system: define what good output looks like, configure tools carefully, measure what matters, and assign clear accountability. Organizations that do this will still benefit from AI-driven speed. They will simply avoid mistaking output for value. This concludes the article.
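The lightweight quality check described in the 30-day plan can be sketched as a simple pre-share gate. This is a minimal illustration only; the field names and checklist items are hypothetical, not a tool referenced in the episode.

```python
from dataclasses import dataclass, field

# Hypothetical pre-share checklist covering the four questions in the
# 30-day plan: what is verified, what still needs validation, what
# decision is being requested, and who is responsible.
@dataclass
class OutputCheck:
    verified_claims: list[str] = field(default_factory=list)
    needs_validation: list[str] = field(default_factory=list)
    decision_requested: str = ""
    owner: str = ""

    def gaps(self) -> list[str]:
        """Return a list of gaps; an empty list means decision-ready."""
        missing = []
        if not self.verified_claims:
            missing.append("no claims marked as verified")
        if not self.decision_requested:
            missing.append("no decision requested")
        if not self.owner:
            missing.append("no responsible owner")
        return missing

# A draft that passes the gate versus an empty draft that does not.
draft = OutputCheck(
    verified_claims=["Q3 figures checked against finance export"],
    needs_validation=["vendor pricing"],
    decision_requested="Approve pilot budget",
    owner="Programme lead",
)
print(draft.gaps())          # prints []  (ready to share)
print(OutputCheck().gaps())  # lists three gaps (not ready)
```

The point of a gate like this is not automation for its own sake; it makes the verification burden visible at the moment of sharing rather than pushing it downstream to the reader.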
You can also read this article on my LinkedIn page where I share regular insights on AI, strategy, and emerging technologies.