The Digital Transformation Playbook
Kieran Gilmurray is a globally recognised authority on Artificial Intelligence, cloud, intelligent automation, data analytics, agentic AI, and digital transformation.
He has authored three influential books and hundreds of articles that have shaped industry perspectives on digital transformation, data analytics, intelligent automation, agentic AI and artificial intelligence.
𝗪𝗵𝗮𝘁 does Kieran do❓
When I'm not chairing international conferences or serving as a fractional CTO or Chief AI Officer, I'm delivering AI, leadership, and strategy masterclasses to governments and industry leaders.
My team and I help global businesses drive AI, agentic AI, digital transformation, and innovation programs that deliver tangible business results.
🏆 𝐀𝐰𝐚𝐫𝐝𝐬:
🔹Top 25 Thought Leader Generative AI 2025
🔹𝗧𝗼𝗽 𝟱𝟬 𝗧𝗵𝗼𝘂𝗴𝗵𝘁 𝗟𝗲𝗮𝗱𝗶𝗻𝗴 𝗖𝗼𝗺𝗽𝗮𝗻𝗶𝗲𝘀 𝗼𝗻 𝗚𝗲𝗻𝗲𝗿𝗮𝘁𝗶𝘃𝗲 𝗔𝗜 𝟮𝟬𝟮𝟱
🔹Top 50 Global Thought Leaders and Influencers on Agentic AI 2025
🔹Top 100 Thought Leader Agentic AI 2025
🔹Top 100 Thought Leader Legal AI 2025
🔹Team of the Year at the UK IT Industry Awards
🔹Top 50 Global Thought Leaders and Influencers on Generative AI 2024
🔹Top 50 Global Thought Leaders and Influencers on Manufacturing 2024
🔹Best LinkedIn Influencers Artificial Intelligence and Marketing 2024
🔹Seven-time LinkedIn Top Voice
🔹Top 14 People to Follow in Data 2023
🔹World's Top 200 Business and Technology Innovators
🔹Top 50 Intelligent Automation Influencers
🔹Top 50 Brand Ambassadors
🔹Global Intelligent Automation Award Winner
🔹Top 20 Data Pros You NEED to Follow
𝗖𝗼𝗻𝘁𝗮𝗰𝘁 my team and me to get business results, not excuses.
☎️ https://calendly.com/kierangilmurray/30min
✉️ kieran@gilmurray.co.uk
🌍 www.KieranGilmurray.com
📘 Kieran Gilmurray | LinkedIn
The Digital Transformation Playbook
Open Source AI’s Quiet Revolution
The ground under AI has shifted, and the tremor came from the open. What started as a niche movement exploded into a global surge of community models that now challenge the tight grip of the biggest labs.
We trace how open weights, permissive licences, and fast‑moving collaboration pushed open source AI from scrappy experiments to frontier‑level performance, and why that matters for builders, researchers, and anyone who cares about power and progress.
TLDR / At A Glance:
- AI’s Linux moment and why it matters
- The Llama leak and Stable Diffusion as catalysts
- How Vicuna, Alpaca, and DeepSeek R1 closed the gap
- Costs, access, and the new builder economy
- Big Tech pivots from secrecy to selective openness
- Ethics and safety trade‑offs in open models
- Governance, licences, and transparency practices
- Geopolitics of openness and national capacity
- Key takeaways on power, participation, and progress
We unpack the sparks that lit the fuse: Meta's Llama catalysed thousands of forks, Stable Diffusion brought high‑quality image generation to everyday hardware, and a leaked Google memo admitted what many had suspected, that open communities iterate faster.
From there, the story accelerates: Vicuna and Alpaca showed near‑ChatGPT quality for hundreds of dollars; DeepSeek R1 stunned the field with strong reasoning at a fraction of historic training costs; and even once‑guarded players shifted, with OpenAI releasing open‑weight models and Meta deepening its commitment to openness.
Along the way, we explore how the Hugging Face ecosystem, BigScience, and BLOOM channelled global volunteer energy into rigorously documented, multilingual models that can be audited, improved, and redeployed.
The conversation turns to the hard questions. Openness brings accountability, reproducibility, and shared progress, but also real risks, from deepfakes to targeted cyber misuse.
We discuss the emerging toolkit of governance for open models: clear usage licences, dataset transparency, safety evaluations, and public training reports that help regulators and civil society assess risks without freezing innovation.
At a geopolitical level, open models broaden national capacity, decentralise influence, and align with democratic values by dispersing expertise beyond a few corporate or state centres.
If you care about where intelligence lives and who gets to build it, this is a roadmap to the new terrain.
Subscribe, share with a friend who loves (or fears) AI, and leave a quick review to tell us where you stand on the open versus closed debate.
𝗖𝗼𝗻𝘁𝗮𝗰𝘁 my team and me to get business results, not excuses.
☎️ https://calendly.com/kierangilmurray/results-not-excuses
✉️ kieran@gilmurray.co.uk
🌍 www.KieranGilmurray.com
📘 Kieran Gilmurray | LinkedIn
🦉 X / Twitter: https://twitter.com/KieranGilmurray
📽 YouTube: https://www.youtube.com/@KieranGilmurray
📕 Want to learn more about agentic AI? Then read my new book on Agentic AI and the Future of Work: https://tinyurl.com/MyBooksOnAmazonUK
Open Source AI's Quiet Revolution: How Community Models Are Challenging Big Tech

This article explores how open source artificial intelligence has rapidly evolved from a niche community effort into a major force challenging big tech's dominance. It examines how community-built models like DeepSeek R1 and Meta's Llama series have achieved frontier-level capabilities, what this means for accessibility and innovation, and how the open movement is reshaping global debates about control, safety, and the future of AI.

AI's Linux moment has arrived

Artificial intelligence is undergoing its Linux moment. For years, the frontier of AI was guarded by a handful of tech giants, each keeping their models tightly closed. That era is ending. In 2025, experts estimated that open source models now trail the top proprietary systems by only about 16 months, a stunning acceleration. Sam Altman, OpenAI's CEO, even admitted his company had been "on the wrong side of history" in resisting open source, and the company began releasing open-weight models. This shift echoes the rise of open source software in the 1990s, when Linux upended Microsoft's dominance by proving that community-built technology could rival corporate engineering. The same dynamic is unfolding again, but at hyperspeed: open AI systems are emerging faster, cheaper, and more globally distributed than any technology movement before them. With this foundation laid, let's look at how this new wave of community-driven AI actually began.

The rise of community-driven AI models

The revolution began quietly in early 2023, when Meta released Llama, a large language model whose parameters were made publicly available to researchers. Though the model weights were later leaked, the event catalyzed a new era of experimentation. Developers across the world began fine-tuning Llama for every imaginable task, from chatbots to code assistants, and sharing their improvements openly online. A few months earlier, in 2022, Stability AI had open sourced its image generator, Stable Diffusion. For the first time, high-quality image generation could run on consumer hardware. "Stable Diffusion will democratize image generation," promised CEO Emad Mostaque, and it did: within months, millions of people were creating images without relying on closed systems like OpenAI's DALL·E.

A leaked Google memo later acknowledged what the industry was beginning to realize: "While we've been squabbling, open source is eating our lunch." The document warned that open communities were solving problems faster than corporate research labs, often in weeks instead of months. What had started as a fringe movement became a full-fledged ecosystem of collaboration, where developers worked together rather than competing behind walls. By the end of 2023, open source AI was advancing so rapidly that even insiders began calling it unstoppable. As one researcher remarked, the frontier has moved from corporate labs to community forums.

Having seen how this community movement started, the next big question is how it managed to catch up so quickly to big tech's most advanced systems.

Closing the performance gap with big tech

Open models have gone from experimental to exceptional. Early community models lagged far behind the likes of GPT-4, but the gap has nearly closed. The Vicuna project showed just how quickly the field was catching up: built on top of Meta's Llama and trained for roughly $300, it achieved about 90% of ChatGPT's conversational quality.
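That $300 figure is worth pausing on. Below is a purely illustrative sketch of the kind of low-cost, parameter-efficient fine-tuning that projects like Vicuna and Alpaca popularized, using the open source Hugging Face transformers, datasets, and peft libraries to adapt a small open-weight model with LoRA. The checkpoint, dataset slice, and hyperparameters are assumptions for demonstration, not any project's actual recipe.

```python
# Illustrative sketch only: parameter-efficient (LoRA) fine-tuning of a small
# open-weight model on an instruction dataset. The checkpoint, dataset slice,
# and hyperparameters are assumptions for demonstration, not Vicuna's or
# Alpaca's actual recipe.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"   # any open-weight causal LM works
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token

model = AutoModelForCausalLM.from_pretrained(base)
model = get_peft_model(model, LoraConfig(      # train only a tiny adapter
    r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM"))

# Public Alpaca instruction data; a 1% slice keeps the demo cheap.
data = load_dataset("tatsu-lab/alpaca", split="train[:1%]")

def to_features(example):
    text = (f"### Instruction:\n{example['instruction']}\n"
            f"### Response:\n{example['output']}")
    return tok(text, truncation=True, max_length=512)

data = data.map(to_features, remove_columns=data.column_names)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out", num_train_epochs=1,
                           per_device_train_batch_size=4, learning_rate=2e-4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
).train()
```

On a single consumer GPU, a run like this finishes in hours rather than weeks, which is exactly the kind of budget the university teams behind these projects were working with.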
Stanford's Alpaca model reached comparable performance for under $600, demonstrating that near-frontier systems no longer required billion-dollar budgets. Then came DeepSeek R1 in 2025, a model that stunned the industry. Trained for roughly $6 million, tiny compared with the hundreds of millions behind GPT-4, it delivered frontier-level reasoning in math, coding, and language tasks. Within days of release, R1 topped app store charts and even triggered a massive market-cap plunge for NVIDIA as investors realized that low-cost open models could disrupt demand for expensive AI infrastructure.

Even OpenAI eventually followed suit, releasing gpt-oss-120b, a 120-billion-parameter model, as its first open-weight release in years. The model performed comparably to several of OpenAI's proprietary offerings, marking an extraordinary leveling of the field. The difference in cost, speed, and accessibility was staggering, and the message was clear: open models were no longer merely good enough; they were competitive. This new performance parity led directly to a broader shift, one focused on who gets to participate in the next era of AI creation.

Democratizing AI development

Open source AI has rewritten the rules of who can build advanced technology. Startups, researchers, and even hobbyists can now fine-tune frontier-level models for a few hundred dollars. Many open models use permissive licenses like Apache 2.0, allowing anyone, even commercial firms, to adapt and deploy them freely. The ripple effects are enormous. On Hugging Face alone, tens of thousands of community-trained variations of open models have been shared. Initiatives like BigScience and BLOOM have mobilized volunteers from dozens of countries to co-develop models collaboratively. This global participation spreads expertise beyond Silicon Valley, giving academics, small enterprises, and developing nations a real stake in AI's future. As the ACLU observed, the more that expertise spreads, the less AI remains susceptible to centralized control. This growing democratization of AI has not gone unnoticed, and it has forced major companies to rethink how they operate in this new open environment.

Big Tech's response: opening up under pressure

The open movement's momentum has forced even the most secretive AI giants to adapt. Meta leaned into transparency early, continuing its Llama series through to Llama 4, each iteration pushing state-of-the-art quality while maintaining open access. This strategy built an enthusiastic global developer base that now drives Meta's research forward, effectively crowdsourcing innovation. Google, by contrast, has been slower to embrace openness. Its leaked "We Have No Moat" memo captured the anxiety inside the company as engineers realized that neither Google nor OpenAI could maintain long-term dominance against open collaboration. While Google has released smaller models and tools, it remains cautious about sharing its largest systems publicly.

OpenAI's pivot was the most striking. After years of secrecy, the company reversed course in mid-2025, releasing gpt-oss-20b and gpt-oss-120b under Apache 2.0 licenses. "We want AI in as many hands as possible," Altman said, framing the decision as a step toward accessibility and transparency. Across the Pacific, Chinese firms have aggressively embraced open models as well. Alibaba's Qwen series and DeepSeek's R1 have become global fixtures, their openness amplifying China's influence in the AI ecosystem.
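What that kind of access looks like in practice is strikingly simple. As a minimal sketch, assuming a small, permissively licensed checkpoint on the Hugging Face Hub (the repo id below is just one example and can be swapped for any other open-weight chat model), downloading and running a community model locally takes only a few lines:

```python
# Illustrative sketch: downloading and running a permissively licensed
# open-weight chat model locally. The repo id is an example; any Apache-2.0
# (or similarly licensed) checkpoint on the Hugging Face Hub can be used.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-0.5B-Instruct"   # small enough for a laptop CPU
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user",
             "content": "In one sentence, why do open-weight models matter?"}]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True,
                                 return_tensors="pt")
out = model.generate(inputs, max_new_tokens=80)

# Decode only the newly generated tokens, skipping the prompt.
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Because the weights live on an open hub rather than behind an API, the same few lines work offline, can be audited, and can be fine-tuned or redeployed without anyone's permission.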
Meanwhile, Europe's Mistral AI has emerged as a leading advocate for open models in the West. Together, these initiatives have shifted the conversation: open is no longer the underdog; it's the trendsetter. But beyond performance and competition, a deeper debate has emerged, one that touches on ethics, governance, and global control.

Open versus closed: innovation, security, and control

The open source debate has moved beyond technical performance to touch core issues of ethics, governance, and global power. Advocates argue that openness accelerates scientific progress and ensures transparency. When models are public, researchers can audit them for bias, safety, and performance, helping to keep AI accountable. DeepSeek's decision to publish its full training process in a peer-reviewed journal was hailed as a new benchmark for transparency. But openness carries risk. Without built-in safeguards, open models can be misused for misinformation, disinformation, or even cyber attacks. The release of Stable Diffusion in 2022 illustrated this trade-off: by removing filters, it enabled creative freedom but also spawned harmful content, including deepfakes and non-consensual imagery. As a result, many open models now come with usage licenses that restrict illegal or unethical applications while preserving free research.

At a geopolitical level, the question of openness is becoming strategic. Open models help nations build domestic AI capacity without depending on US or Chinese corporations. They also align with democratic principles such as transparency, collaboration, and decentralized control, whereas closed ecosystems risk concentrating power among a few players. The ACLU has framed this divide as a battle over whether AI will foster freedom or authoritarianism. Policymakers now face a delicate balance: how to preserve innovation and openness while managing legitimate risks. And that leads to the final takeaway: the broader significance of this revolution and where it is heading next.

Conclusion: a revolution that's no longer quiet

The quiet revolution of open source AI has become impossible to ignore. Community-built models are eroding big tech's monopoly on innovation, delivering frontier-level performance at a fraction of the cost. This movement is redefining not just the technology itself, but the ethics and economics of who controls it. Open source AI embodies a return to the spirit of scientific inquiry: shared progress, reproducibility, and global collaboration. The future of AI may still be contested, but the momentum is clearly shifting toward transparency and collective ownership. As one open AI advocate put it, "if we want AI to reflect democratic values, we have to build it in the open." The revolution may have started quietly, but it's getting louder by the day.

Thank you for reading. If you enjoyed this piece, be sure to check out my other articles and insights on my website. There's plenty more to explore about the evolving world of AI and technology.