The Digital Transformation Playbook
Kieran Gilmurray is a globally recognised authority on Artificial Intelligence, intelligent automation, data analytics, agentic AI, leadership development and digital transformation.
He has authored four influential books and hundreds of articles that have shaped industry perspectives on digital transformation, data analytics, intelligent automation, agentic AI, leadership and artificial intelligence.
𝗪𝗵𝗮𝘁 does Kieran do❓
When Kieran is not chairing international conferences, serving as a fractional CTO or Chief AI Officer, he is delivering AI, leadership, and strategy masterclasses to governments and industry leaders.
His team helps global businesses drive AI, agentic AI, digital transformation, leadership and innovation programs that deliver tangible business results.
🏆 𝐀𝐰𝐚𝐫𝐝𝐬:
🔹Top 25 Thought Leader Generative AI 2025
🔹Top 25 Thought Leader Companies on Generative AI 2025
🔹Top 50 Global Thought Leaders and Influencers on Agentic AI 2025
🔹Top 100 Thought Leader Agentic AI 2025
🔹Top 100 Thought Leader Legal AI 2025
🔹Team of the Year at the UK IT Industry Awards
🔹Top 50 Global Thought Leaders and Influencers on Generative AI 2024
🔹Top 50 Global Thought Leaders and Influencers on Manufacturing 2024
🔹Best LinkedIn Influencers Artificial Intelligence and Marketing 2024
🔹Seven-time LinkedIn Top Voice
🔹Top 14 people to follow in data in 2023
🔹World's Top 200 Business and Technology Innovators
🔹Top 50 Intelligent Automation Influencers
🔹Top 50 Brand Ambassadors
🔹Global Intelligent Automation Award Winner
🔹Top 20 Data Pros you NEED to follow
𝗖𝗼𝗻𝘁𝗮𝗰𝘁 Kieran's team to get business results, not excuses.
☎️ https://calendly.com/kierangilmurray/30min
✉️ kieran@gilmurray.co.uk
🌍 www.KieranGilmurray.com
📘 Kieran Gilmurray | LinkedIn
AI Investments in 2026: A Promising CFO and COO Decision Framework
AI investment decisions are entering a more disciplined phase as organisations demand measurable results from earlier experimentation. Finance and operations leaders must now align on where AI can deliver real value.
This episode explores how CFOs and COOs can evaluate AI initiatives using a structured, outcome-driven framework.
TLDR / At a Glance
• Rising financial scrutiny on AI spending
• Gap between pilots and scaled execution
• Focus on process-heavy quick wins
• Joint CFO and COO evaluation frameworks
• Importance of data, workflows, and skills readiness
• Governance, compliance, and risk as core criteria
AI success depends on combining financial discipline with operational feasibility to deliver measurable and scalable business outcomes.
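As a rough illustration of the joint CFO and COO evaluation idea, the criteria above can be combined into a simple weighted scorecard. This is a hypothetical sketch: the criteria names, weights, and approval threshold are illustrative choices, not figures from the episode.

```python
# Hypothetical weighted scorecard for jointly evaluating AI initiatives.
# Criteria, weights, and the approval threshold are illustrative only.

CRITERIA_WEIGHTS = {
    "financial_return": 0.30,        # CFO lens: measurable return on spend
    "operational_feasibility": 0.25, # COO lens: fit with existing workflows
    "data_readiness": 0.20,          # quality and availability of data
    "skills_readiness": 0.10,        # team capability to run the system
    "governance_risk": 0.15,         # compliance and risk controls in place
}

def score_initiative(ratings: dict[str, float], threshold: float = 3.5):
    """Combine 1-5 ratings into a weighted score and an approve/defer flag."""
    missing = set(CRITERIA_WEIGHTS) - set(ratings)
    if missing:
        raise ValueError(f"Missing ratings for: {sorted(missing)}")
    total = sum(CRITERIA_WEIGHTS[c] * ratings[c] for c in CRITERIA_WEIGHTS)
    return round(total, 2), total >= threshold

# Example: a process-heavy quick win with strong data but average skills.
score, approved = score_initiative({
    "financial_return": 4,
    "operational_feasibility": 5,
    "data_readiness": 4,
    "skills_readiness": 3,
    "governance_risk": 4,
})
```

The point of the sketch is the structure, not the numbers: finance and operations agree the weights once, then every proposed initiative is rated against the same criteria.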
𝗖𝗼𝗻𝘁𝗮𝗰𝘁 my team and me to get business results, not excuses.
☎️ https://calendly.com/kierangilmurray/results-not-excuses
✉️ kieran@gilmurray.co.uk
🌍 www.KieranGilmurray.com
📘 Kieran Gilmurray | LinkedIn
🦉 X / Twitter: https://twitter.com/KieranGilmurray
📽 YouTube: https://www.youtube.com/@KieranGilmurray
📕 Want to learn more about agentic AI? Then read my new book on Agentic AI and the Future of Work: https://tinyurl.com/MyBooksOnAmazonUK
The New AI Ownership Problem
Who in the C-suite should own AI? The Essential Leadership Guide.

This article explores why the question of who should own AI in the C-suite is becoming more complex as systems move from passive tools to agents that can plan, decide and act across workflows. Rather than searching for a single executive owner, leaders now need a clear map of which AI decisions belong to which functions. After reading this article, you will understand why agentic AI creates an ownership challenge across leadership teams, what a practical decision rights model looks like, and which controls need to be in place before autonomous systems can operate safely at scale.

What makes AI agentic in practical terms?

The simplest way to understand agentic AI is to stop thinking of it as a smarter chatbot. A traditional tool waits for instructions. An agent is designed to pursue a goal with a degree of autonomy. That difference matters because it changes how work is done. An agent can draft, retrieve, prioritize, route tasks, trigger follow-up actions, and operate for extended periods without constant human input. As systems begin to behave like delegated actors, the ownership question shifts. It is no longer about who purchased the software; it becomes about who authorized the system to act, who set its boundaries, who pays for its operation, who monitors its behavior, and who is accountable when something goes wrong.

Why the ownership debate is rational

The tension around AI ownership is not political; it is structural. Agentic systems introduce multiple types of risk and impact at once. They affect technology, processes, finances, workforce design, and data governance simultaneously. This means several C-suite roles have valid claims. Technology leaders are responsible for integration and system performance. Operations leaders are responsible for how work is executed. Finance leaders are responsible for cost and return.
Risk and legal functions are responsible for accountability and compliance. Human resources and data leaders are involved where systems affect people or rely on sensitive information. Because the reality is distributed, ownership must also be distributed. A single owner model often fails because no one role has control over all the necessary decisions.

Replace ownership with decision rights

The most useful question is not who owns AI; it is who owns which decisions. This approach requires leaders to define key decisions and assign them to the functions best placed to make them. Business leaders should own outcomes. Technology leaders should own architecture and integration. Security should own permissions and access. Finance should own budget controls. Risk and legal functions should own rules, escalation, and accountability. This model works because it connects governance to actual control points rather than job titles. It avoids situations where someone is named as the owner but lacks authority over systems, budgets, or risk.

The controls that matter before agents can act

The most reliable way to assess governance is to examine the controls in place. If an agent can access systems, trigger actions, or influence outcomes, certain controls should exist before deployment. Each agent should have a distinct identity. Access should follow least-privilege principles. There should be clear limits on spend and usage. Approval checkpoints should exist for higher-risk actions. All actions should be logged and traceable. These controls ensure that autonomy does not outpace oversight. They also align with emerging standards that treat AI systems as actors with permissions, risks, and accountability requirements.

What changes across regions?

Governance expectations vary by region but follow a similar direction. In the European Union, regulatory frameworks are becoming more defined, with clear timelines for transparency, accountability, and compliance.
In the United Kingdom, guidance is evolving towards a stronger emphasis on trust, accountability, and responsible use. In the United States, the approach is more fragmented, making standards-based governance and internal controls particularly important. Organizations operating across regions must therefore design governance models that meet multiple expectations while remaining operationally practical.

Does a chief AI officer help?

A chief AI officer can add value, but only under specific conditions. The role works best when it focuses on coordination rather than control. It should maintain the decision rights model, align different functions, and prevent fragmentation. If the role becomes a bottleneck for every decision, it slows progress and increases complexity. The effectiveness test is simple: if the role improves clarity and speed, it is working; if it concentrates authority without improving outcomes, it needs adjustment.

How to know if the model is working

A strong governance model improves both speed and control. This should result in faster movement from pilot to production, fewer incidents, better cost management, and clearer oversight. The most practical indicators are operational. Leaders should be able to see which agents exist, what they can access, what they cost, who approved them, and what actions they have taken. If those questions cannot be answered quickly, governance is not yet functioning effectively.

Conclusion

The question of AI ownership in the C-suite is real, but it is solvable. The mistake is trying to assign ownership to a single role. Agentic systems operate across multiple functions. The practical solution is to assign ownership of decisions and link those decisions to real controls. This turns governance into an operational system rather than a theoretical discussion. In a world where AI systems can act, spend, and influence outcomes, that system determines whether organizations can move quickly while remaining in control.
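The pre-deployment controls the article describes (a distinct identity per agent, least-privilege access, spend limits, approval checkpoints for higher-risk actions, and full logging) can be sketched in code. This is a minimal illustration with hypothetical names; a real deployment would enforce these controls through identity, access management, and policy tooling, not an in-process class.

```python
# Illustrative sketch of agent guardrails: distinct identity, least-privilege
# access, spend limits, approval checkpoints, and an audit trail.
# All names here are hypothetical, not a real framework's API.
import uuid
from dataclasses import dataclass, field

@dataclass
class AgentGovernor:
    name: str
    allowed_actions: set          # least privilege: explicit allow-list
    spend_limit: float            # hard cap on what the agent may spend
    high_risk_actions: set        # require a named human approver
    agent_id: str = field(default_factory=lambda: str(uuid.uuid4()))  # distinct identity
    spent: float = 0.0
    audit_log: list = field(default_factory=list)

    def authorize(self, action, cost=0.0, approved_by=None):
        """Return True only if the action passes every control; log the decision."""
        if action not in self.allowed_actions:
            decision, reason = False, "not in allow-list"
        elif self.spent + cost > self.spend_limit:
            decision, reason = False, "spend limit exceeded"
        elif action in self.high_risk_actions and approved_by is None:
            decision, reason = False, "human approval required"
        else:
            decision, reason = True, "ok"
            self.spent += cost
        # every request is logged and traceable to the agent's identity
        self.audit_log.append({
            "agent_id": self.agent_id, "action": action, "cost": cost,
            "approved_by": approved_by, "decision": decision, "reason": reason,
        })
        return decision

# Example: routing a ticket is allowed; issuing a refund needs an approver.
agent = AgentGovernor(
    name="invoice-triage-agent",
    allowed_actions={"route_ticket", "issue_refund"},
    spend_limit=100.0,
    high_risk_actions={"issue_refund"},
)
agent.authorize("route_ticket")                                 # allowed
agent.authorize("issue_refund", cost=40.0)                      # blocked: no approver
agent.authorize("issue_refund", cost=40.0, approved_by="coo")   # allowed
```

The audit log is the part that answers the article's operational questions: which agents exist, what they can do, what they cost, who approved them, and what actions they have taken.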
This concludes the article. You can also read this article on my LinkedIn page where I share regular insights on AI, strategy, and emerging technologies.