The Digital Transformation Playbook
Kieran Gilmurray is a globally recognised authority on Artificial Intelligence, intelligent automation, data analytics, agentic AI, leadership development and digital transformation.
He has authored three influential books and hundreds of articles that have shaped industry perspectives on digital transformation, data analytics, intelligent automation, agentic AI, leadership and artificial intelligence.
𝗪𝗵𝗮𝘁 does Kieran do❓
When Kieran is not chairing international conferences, serving as a fractional CTO or Chief AI Officer, he is delivering AI, leadership, and strategy masterclasses to governments and industry leaders.
His team helps global businesses drive AI, agentic AI, digital transformation, leadership and innovation programs that deliver tangible business results.
🏆 𝐀𝐰𝐚𝐫𝐝𝐬:
🔹Top 25 Thought Leader Generative AI 2025
🔹Top 25 Thought Leader Companies on Generative AI 2025
🔹Top 50 Global Thought Leaders and Influencers on Agentic AI 2025
🔹Top 100 Thought Leader Agentic AI 2025
🔹Top 100 Thought Leader Legal AI 2025
🔹Team of the Year at the UK IT Industry Awards
🔹Top 50 Global Thought Leaders and Influencers on Generative AI 2024
🔹Top 50 Global Thought Leaders and Influencers on Manufacturing 2024
🔹Best LinkedIn Influencers Artificial Intelligence and Marketing 2024
🔹Seven-time LinkedIn Top Voice
🔹Top 14 people to follow in data in 2023
🔹World's Top 200 Business and Technology Innovators
🔹Top 50 Intelligent Automation Influencers
🔹Top 50 Brand Ambassadors
🔹Global Intelligent Automation Award Winner
🔹Top 20 Data Pros you NEED to follow
𝗖𝗼𝗻𝘁𝗮𝗰𝘁 Kieran's team to get business results, not excuses.
☎️ https://calendly.com/kierangilmurray/30min
✉️ kieran@gilmurray.co.uk
🌍 www.KieranGilmurray.com
📘 Kieran Gilmurray | LinkedIn
The Digital Transformation Playbook
The Myth Of The AI Jobpocalypse And What The Data Actually Shows
Forget the neat headline that blames ChatGPT for a white-collar collapse. We stack three heavyweight datasets - occupation-level unemployment risk, 10.5 million LinkedIn profiles, and three million university syllabi - to test the timeline, and the tale unravels.
The spike in risk for AI-exposed roles began in early 2022, months before the public touched the tool. Around launch, risk stabilised. The more convincing culprits are old-school: rising interest rates and a pandemic hiring binge that needed a hard correction.
My Google NotebookLM bots pull apart the clean “AI killed white-collar work” story and test it against unemployment risk, 10.5 million LinkedIn profiles, and three million university syllabi. The timeline breaks the myth, and the data points to macroeconomics and the power of complementarity.
At a Glance / TLDR:
- Unemployment risk for AI-exposed jobs rising in early 2022, not after ChatGPT
- Why risk stabilised around launch and the Connecticut outlier
- Monetary tightening and post-pandemic overhiring as key drivers
- Graduate outcomes from 2021–2022 cohorts across tech and other high-paying fields
- Syllabi analysis showing AI-exposed skills correlating with higher pay post-launch
- Complementarity over replacement and the shift from generation to judgment
- Practical guidance on learning core skills and using AI to amplify them
We follow the canaries next: recent grads. If AI erased entry-level tasks, the classes of 2021 and 2022 should be uniquely punished in tech. Instead, we see a broader white-collar chill hitting finance, consulting, and other high-paying tracks at the same time. This isn’t an AI-specific rejection; it’s a tight, risk-averse market trimming junior headcount across the board. That context matters for anyone trying to read their prospects or redesign a hiring plan.
The real twist comes from the classroom. By matching course learning objectives—coding, synthesis, argument evaluation—to outcomes, we see that students with higher exposure to AI-performable tasks fared better after late 2022. Not worse. Why? Complementarity. AI doesn’t replace good writers and engineers; it multiplies them. Give Copilot to someone who understands architecture and they ship faster and cleaner. Give it to a novice and you get confident chaos. The value has shifted from generation to judgment: specifying, verifying, and integrating outputs with real-world constraints.
We end with clear takeaways. Stop misdiagnosing a macro downturn as a machine takeover. Double down on foundations—code structure, data modelling, rhetoric, editorial standards—and pair them with modern tools to raise your personal ceiling. If you’re a leader, design roles and training for verification and integration, not just production. If you’re a learner, build projects that prove leverage, not just fluency. Subscribe for more data-driven deep dives, share this with a friend who’s rethinking their career, and leave a review to tell us which skill you plan to sharpen next.
Link to research: AI-exposed jobs deteriorated before ChatGPT
𝗖𝗼𝗻𝘁𝗮𝗰𝘁 my team and me to get business results, not excuses.
☎️ https://calendly.com/kierangilmurray/results-not-excuses
✉️ kieran@gilmurray.co.uk
🌍 www.KieranGilmurray.com
📘 Kieran Gilmurray | LinkedIn
🦉 X / Twitter: https://twitter.com/KieranGilmurray
📽 YouTube: https://www.youtube.com/@KieranGilmurray
📕 Want to learn more about agentic AI? Then read my new book on Agentic AI and the Future of Work: https://tinyurl.com/MyBooksOnAmazonUK
Welcome back to the deep dive. Okay, I want you to picture something with me for a second. Imagine we're standing in this uh dark room, and floating in front of us is this massive, glowing digital timeline. It stretches back a few years, but there's one bright, throbbing red line just slicing through the recent past. November 2022.
Google Agent 1:The launch of ChatGPT.
Google Agent 2:Exactly. Ground zero. I mean, the story we all tell ourselves, right? On Twitter, on LinkedIn, over dinner with friends who are nervous about their careers. The story is that this date was the asteroid impact. That since that day, white-collar work has been under siege. We see the layoffs, the hiring freezes, the uh the panic in tech, and it feels like a very direct, you know, cause and effect thing. AI arrived, and boom, the software engineers and the copywriters started disappearing.
Google Agent 1:It's a very clean story. It's compelling. It fits our anxieties perfectly because it gives us a villain to point to. But today, we're gonna look at a fascinating stack of research, specifically a paper called AI Exposed Jobs Deteriorated Before ChatGPT. And their data suggests that this whole asteroid narrative is, well, statistically, impossible.
Google Agent 2:Impossible. That's a really strong word. I mean, given how many people I know who are struggling to find work right now, you can't tell me the job market isn't terrible.
Google Agent 1:Oh no, the struggle is absolutely real. I'm not denying the pain in the market or the fact that it's much harder to get a job now than it was, say, three years ago. But the culprit you're blaming might be the wrong one. The core finding we're going to unpack today is that the labor market for these high-tech, these AI-exposed jobs, it started crumbling months, in some cases almost a year, before ChatGPT was even released to the public.
Google Agent 2:Okay, so if we're using a crime scene analogy, you're saying the victim was already bleeding out before the suspect even walked into the room.
Google Agent 1:That's a vivid way to put it, but uh yeah. Essentially, yes. And to prove it, we're not just looking at headlines or, you know, vibes. We have three massive data sets to unpack. We're talking about government unemployment records, millions of LinkedIn profiles. And uh this is a cool one, a huge database of university syllabi.
Google Agent 2:So our mission today is to figure out if we're basically wrongly accusing the new technology of what might just be a standard old school economic downturn. Because if we misdiagnose the disease, right, we're gonna take the wrong medicine. So let's get into the evidence.
Google Agent 1:Let's dive in.
Google Agent 2:Okay, let's start with piece of evidence number one: unemployment risk. The paper has these uh these really complex looking curves for different job sectors. But before we get into the wiggles on the graph, can you define unemployment risk for me? Is that just the unemployment rate? They sound identical.
Google Agent 1:They do sound similar, but the distinction is actually crucial here for accuracy. The researchers use data from the Department of Labor combined with the Bureau of Labor Statistics. So the unemployment rate is just a snapshot, right? It's who doesn't have a job right now. Unemployment risk in this context measures the probability that a worker will claim unemployment benefits in a specific state in a specific month given their occupation.
Google Agent 2:So it controls for population. It's not just "more tech workers in California are getting fired." It's more like, if you're a software engineer in California in March, what are your actual odds of filing a claim?
Google Agent 1:Precisely. It strips away all that noise of population density and market size to give us a pure risk metric. It tells us how safe a specific job really is.
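(A minimal sketch of how a risk metric like this could be computed, assuming a hypothetical monthly panel of claims and employment counts; the column names and numbers are illustrative stand-ins for the DOL and BLS data the agents mention, not the paper's actual pipeline.)

```python
import pandas as pd

# Hypothetical monthly panel: one row per occupation x state x month.
panel = pd.DataFrame({
    "occupation": ["software_engineer", "software_engineer", "construction"],
    "state": ["CA", "CA", "CA"],
    "month": ["2022-01", "2022-02", "2022-01"],
    "new_ui_claims": [1200, 1500, 900],
    "employed_workers": [400_000, 398_000, 150_000],
})

# Unemployment *risk*: probability that a worker in this occupation, state and
# month files an unemployment claim, i.e. claims normalised by the employed base.
panel["unemployment_risk"] = panel["new_ui_claims"] / panel["employed_workers"]

# Aggregate to a monthly series per occupation to see when risk started rising.
risk_by_month = (
    panel.groupby(["occupation", "month"])["unemployment_risk"]
    .mean()
    .unstack("occupation")
)
print(risk_by_month)
```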
Google Agent 2:Okay, got it. So what about the other side of that equation? AI exposed. When we say that, are we just talking about coders?
Google Agent 1:We're definitely not talking about like construction workers or chefs in the study. Yeah, AI-exposed refers to occupations where a big chunk of the core tasks can be affected by large language models. So we are mostly talking about computer and mathematical jobs. They contrast these with low-exposure fields, like you said, construction, food prep, things like that.
Google Agent 2:Okay, so we have the metric probability of filing a claim, and we have the target group people whose jobs are code and text. What does the data usually look like, you know, before all this madness?
Google Agent 1:Historically, it's a blowout. High-tech AI exposed jobs are typically incredibly safe. They have something like 20 to 80 percent lower unemployment risk than manual labor jobs.
Google Agent 2:That tracks. That's why every parent for the last 20 years has been yelling, learn to code at their kids. It was the golden ticket to stability.
Google Agent 1:Exactly. But here's the turning point, and you really need to pay attention to the date here. In early 2022, that gap starts narrowing sharply. The unemployment risk for these safe jobs, it begins to rise and aggressively.
Google Agent 2:Early 2022, hold on, let me check my mental calendar. That is almost a full year before I ever typed a prompt into ChatGPT.
Google Agent 1:Correct. By the time ChatGPT launched in November 2022, the risk levels for these jobs had already spiked pretty significantly. And crucially, right around the time the AI launched, that risk actually began to stabilize. The sharpest increase, the real damage, happened well before the tool was even out there.
Google Agent 2:That's that's wild.
Google Agent 1:Yeah.
Google Agent 2:It completely flips the script. Yeah. But hang on, I want to push back a bit. Surely there was some reaction after the launch. I remember reading endless articles about copywriters losing clients to ChatGPT in early 2023. You can't tell me that line just went flat.
Google Agent 1:You're right to be skeptical, and there is one little nuance in the data. The charts do show that office and administrative support jobs, you know, data entry, basic scheduling, the more routine stuff, they did show a slight rise in risk post-launch.
Google Agent 2:Uh-huh. So the robots did come for the admin.
Google Agent 1:Well, hold on. This is where big data gets tricky. The researchers, they dug into that slight rise and found that it almost completely disappears if you exclude data from just one state.
Google Agent 2:Connecticut.
Google Agent 1:Connecticut. What in the world happened in Connecticut? It's likely a reporting issue or a change in how they processed claims that year. It wasn't a national AI trend. When you take out that one outlier, the AI effect on admin jobs, it just vanishes from the national picture.
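(The Connecticut check described here is essentially a leave-one-state-out robustness test. A rough sketch with invented numbers, not the authors' code.)

```python
import pandas as pd

# Hypothetical admin-job risk by state and month (illustrative figures only):
# one outlier state can manufacture an apparent national post-launch rise.
admin = pd.DataFrame({
    "state": ["CT", "CT", "NY", "NY", "TX", "TX"],
    "month": ["2022-11", "2023-03"] * 3,
    "unemployment_risk": [0.010, 0.030, 0.008, 0.009, 0.007, 0.008],
})

national = admin.groupby("month")["unemployment_risk"].mean()
without_ct = (
    admin[admin["state"] != "CT"]
    .groupby("month")["unemployment_risk"]
    .mean()
)

comparison = pd.DataFrame({"all_states": national, "excluding_CT": without_ct})
print(comparison)  # if the rise vanishes without CT, it is not a national trend
```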
Google Agent 2:That is a masterclass in why you can't just trust a headline number. Okay, so if I accept your premise that the bleeding started in early 2022, then who's the killer? Because people were losing their jobs. That part is real. If it wasn't the robots, then what was it?
Google Agent 1:Oh, the job loss was very real. But the culprit was likely macroeconomics. The paper points to two main drivers, and they are um painfully boring compared to the sci-fi AI apocalypse narrative. First, monetary tightening. Interest rates. Right. The Fed started hiking rates to fight inflation. And you have to remember the tech sector runs on cheap capital. It's fueled by VC money, by loans. When interest rates go up, that free money just dries up. Investment slows, startups stop hiring.
Google Agent 2:And the second factor.
Google Agent 1:A massive correction. I mean, do you remember 2020 and 2021?
Google Agent 2:Vaguely. Yeah. I mostly remember being inside and buying a lot of stuff on Amazon and thinking I needed a Peloton.
Google Agent 1:Exactly. And because everyone was inside, every tech company, from Amazon to Zoom to Peloton, went on this historic hiring spree. They doubled their headcounts thinking that digital life was permanent.
Google Agent 2:The new normal, right?
Google Agent 1:Right. But it wasn't normal. It was a bubble. What we saw in early 2022 was that bubble popping, a hangover from the pandemic hiring party, compounded by expensive money, it just happened to overlap with AI's big debut.
Google Agent 2:So we saw two things happening at once, tech crashing and AI rising, and our brains just drew a straight line between them.
Google Agent 1:It's a classic narrative fallacy. Correlation is not causation. But, you know, because AI is scary and new, it makes for a much better headline than Federal Reserve adjusts interest rates.
Google Agent 2:Okay, that covers people who already had jobs. But here's where I get worried, and this is the question I hear from listeners all the time. What about the canaries in the coal mine? I'm talking about recent college grads.
Google Agent 1:This is a major concern, yeah.
Google Agent 2:Because the logic is, okay, maybe a senior developer is safe. They know the legacy code, but surely AI can replace the entry-level grunt work, right? The paper even cites the CEO of Anthropic warning about this.
Google Agent 1:So what does the data say about the rookies?
Google Agent 2:To answer this, the researchers partnered with a company called Revelio Labs to analyze a truly staggering data set.
Google Agent 1:Yeah.
Google Agent 2:10.5 million LinkedIn profiles.
Google Agent 1:10.5 million. That is not a survey. That's practically a census of the professional world.
Google Agent 2:It is. They track degree completion, field of study, and a metric they call time to first job. They're basically counting the days you sit on the couch after graduation before you update your LinkedIn status.
Google Agent 1:Are the computer science majors still sitting on the couch?
Google Agent 2:The findings confirm it is incredibly tough out there. Graduates from the class of 2021 onwards, they entered AI exposed jobs at significantly lower rates than the years before.
Google Agent 1:2021.
Google Agent 2:Yep. The timing check holds up. That gap opened well before late 2022. The class of 21 and the class of 22 were already struggling to break into tech before the public even had ChatGPT. So they were facing headwinds, but it wasn't because of a chatbot.
Google Agent 1:Correct. However, and we have to be honest with the data here, this is exactly what the researchers wanted to test. So they ran a control. They asked: is this unique to tech and AI jobs? They compared this trend against high-paying jobs in general, finance, consulting, management, stuff that pays well but isn't necessarily about coding.
Google Agent 2:Then what'd they find?
Google Agent 1:The trend was almost identical. Graduates trying to get into any high-paying role, regardless of AI exposure, they face the exact same delays.
Google Agent 2:So it wasn't an AI rejection, it was a white-collar recession.
Google Agent 1:Exactly. The economy just got tighter for everyone trying to enter that upper bracket. It's just hard to get a high-paying job right now, period. It's not because an algorithm can write a cover letter; it's because companies aren't hiring expensive juniors in any department.
Google Agent 2:That's... well, it's weirdly comforting. I mean, it sucks, but at least it means the skills aren't obsolete.
Google Agent 1:Yeah.
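(A hedged sketch of the time-to-first-job comparison the agents describe, using an invented profile schema; the field labels, dates and median-by-cohort comparison are illustrative assumptions, not Revelio Labs' actual methodology.)

```python
import pandas as pd

# Hypothetical profile records: graduation date, first observed job start,
# and whether the field of study is AI-exposed or another high-paying track.
profiles = pd.DataFrame({
    "grad_year": [2019, 2019, 2021, 2021, 2022, 2022],
    "field": ["ai_exposed", "other_high_pay", "ai_exposed",
              "other_high_pay", "ai_exposed", "other_high_pay"],
    "graduation_date": pd.to_datetime([
        "2019-06-01", "2019-06-01", "2021-06-01",
        "2021-06-01", "2022-06-01", "2022-06-01"]),
    "first_job_start": pd.to_datetime([
        "2019-08-01", "2019-09-01", "2022-01-15",
        "2022-02-01", "2023-03-01", "2023-02-20"]),
})

profiles["days_to_first_job"] = (
    profiles["first_job_start"] - profiles["graduation_date"]
).dt.days

# If the slowdown were AI-specific, the gap should widen only for AI-exposed
# fields; a parallel rise in both groups points to a broader hiring chill.
print(profiles.groupby(["grad_year", "field"])["days_to_first_job"].median())
```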
Google Agent 2:Which actually leads us perfectly into the next bit. Because this is the big existential crisis for people. If I'm studying coding or writing, am I wasting my time? Should I just switch my major to plumbing?
Google Agent 1:This is my favorite part of the study because the answer is so, so counterintuitive. It goes against everything the doomers are saying. The researchers used a third massive data set: three million university syllabi from the Open Syllabus Project.
Google Agent 2:Wait, they analyzed the syllabi? Like they were literally reading what was being taught in the classes.
Google Agent 1:Yes. They looked at the learning objectives. They matched specific LLM-exposed tasks, things like writing code, synthesizing information, evaluating arguments, to the actual career outcomes of the graduates who took those courses.
Google Agent 2:So they could see, okay, student A took a lot of classes that taught things AI can now do. And then they looked at what happened to student A's bank account.
Google Agent 1:Exactly. They were looking for a penalty. If AI destroys the value of coding, then students who learned a lot of coding should be getting paid less, right?
Google Agent 2:That makes sense. Supply and demand. If the supply of code is infinite because of bots, the price should drop.
Google Agent 1:That's the logic. But here is the finding. Pre-ChatGPT, having high exposure to these skills didn't make a huge difference in your starting salary. It was pretty standard. But post-ChatGPT, after December 2022, graduates with higher educational exposure to AI-replaceable skills, coding, writing, analysis, they actually had higher starting salaries and shorter job searches than their peers.
Google Agent 2:Stop. You have to say that again. You're saying that knowing how to do the exact things the AI can do made them more employable, not less.
Google Agent 1:That is exactly what the data says.
Google Agent 2:That feels completely backwards. If the AI can write code, why would you pay a human a premium for knowing how to write code? Shouldn't the value of that skill just go to zero?
Google Agent 1:It comes down to a concept called complementarity. The paper argues these skills aren't being replaced, they're being leveraged. AI doesn't make writing redundant, it makes a good writer more productive.
Google Agent 2:Can you give me a concrete example?
Google Agent 1:Think about it this way. If you give a complex coding tool, like GitHub Copilot, to someone who doesn't understand software architecture, what do you get?
Google Agent 2:You get a lot of broken code very, very quickly.
Google Agent 1:Exactly. You get confident garbage. You get a mess that you have no idea how to fix. But if you give that same tool to someone who understands the logic, the syntax, the structure, they become 10 times faster. They can verify the output, debug the hallucinations, and stitch it all together.
Google Agent 2:So it's the difference between being a pilot flying a plane with autopilot versus a passenger just sitting in the cockpit hoping the autopilot works.
Google Agent 1:That is a perfect analogy. The market is rewarding the people with the foundational skills because they're the ones who can actually wield these new tools effectively. The paper explicitly warns against abandoning skills like writing and coding.
Google Agent 2:So the advice, don't learn to code because AI will do it for you, is actually terrible financial advice.
Google Agent 1:According to this data, yes, it's terrible advice. You need to know the code to manage the AI that writes the code.
Google Agent 2:This is a huge relief. It really changes the vibe from, like, "resistance is futile" to "get good at the basics so you can command the ship."
Google Agent 1:It brings agency back to the learner. You're not being automated away. You're being upgraded if you do the work. The skill isn't just typing the code anymore, the skill is evaluating the code. And you can't evaluate what you don't understand.
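(A minimal sketch of the syllabus-exposure comparison discussed above: score coursework for LLM-performable tasks, then contrast starting salaries before and after the launch window. The exposure scores, salary figures and simple difference-in-differences contrast are illustrative assumptions, not the paper's estimation strategy.)

```python
import pandas as pd

# Hypothetical graduates with a coursework-derived exposure score (share of
# learning objectives that are LLM-performable: coding, synthesis, evaluation)
# and their first observed starting salary.
grads = pd.DataFrame({
    "exposure_score": [0.8, 0.7, 0.2, 0.1, 0.9, 0.3],
    "graduated_post_launch": [False, False, False, True, True, True],
    "starting_salary": [72_000, 70_000, 65_000, 60_000, 88_000, 58_000],
})

grads["high_exposure"] = grads["exposure_score"] >= 0.5

# Two-by-two of mean salaries: high vs low exposure, before vs after launch.
cell_means = grads.pivot_table(
    index="high_exposure",
    columns="graduated_post_launch",
    values="starting_salary",
    aggfunc="mean",
)
print(cell_means)

# Difference-in-differences style contrast: did the premium for high-exposure
# coursework grow after the launch window?
did = (cell_means.loc[True, True] - cell_means.loc[True, False]) - (
    cell_means.loc[False, True] - cell_means.loc[False, False]
)
print("change in high-exposure premium:", did)
```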
Google Agent 2:So let's zoom out a bit. We've looked at unemployment risk, LinkedIn timelines, the syllabus data. If we put these three streams together, what's the headline?
Google Agent 1:The headline is that the popular narrative is wrong. Number one, unemployment risk rose before AI arrived. This is a story about macroeconomics, interest rates, post-pandemic corrections, not technology.
Google Agent 2:Right. The timing just doesn't fit the murder weapon.
Google Agent 1:Number two, graduates struggled to find work before AI arrived. That's a hiring cycle issue. It's companies tightening their belts. And number three, maybe the most important part, AI relevant education is more valuable now, not less.
Google Agent 2:Which is the big silver lining in all of this.
Google Agent 1:There's also a cautionary tale in here for researchers and policymakers. The authors warn about the danger of what they call natural experiments. It's very tempting to see ChatGPT launch in November 2022, look at bad economic data in 2023, and say A caused B.
Google Agent 2:It's lazy analysis.
Google Agent 1:It is. You cannot ignore the massive interest rate hikes and the tech bubble popping that were happening at the exact same time. If you blame the wrong cause, you'll try to fix the wrong problem.
Google Agent 2:And for the individual listener, the takeaway is pretty concrete. The effort you put into learning the hard stuff, the logic of code, the structure of writing, it's not wasted. It's actually the key to surviving this transition.
Google Agent 1:It's the new barrier to entry. AI lowers the floor for producing content. Anyone can generate text now, but it raises the ceiling for quality. And to reach that ceiling, you need those deep human skills.
Google Agent 2:I love that framing. Raises the ceiling. So we're not seeing the death of the radio star. We're just seeing the radio star get a much better synthesizer.
Google Agent 1:And maybe the radio star needs to learn how to mix their own tracks now. The job description changes, but the job doesn't just disappear.
Google Agent 2:Fair enough. It sounds like adaptability is the real skill here.
Google Agent 1:You know, this raises one final question to leave our listeners with. We've established that the "AI caused the recession" narrative is, well, false based on the timing, but that data on salaries is what really sticks with me.
Google Agent 2:How so?
Google Agent 1:If learning these AI exposed skills leads to higher pay now, then maybe the danger isn't being replaced by AI. Maybe the real danger is being unable to work with it. We spend so much time worrying about what AI can do. Are we focusing enough on what humans can do better when they understand the basics? Are we teaching students to be verifiers and editors of AI output rather than just creators? Because that seems to be where the money is going.
Google Agent 2:That's a provocative thought. We might need to stop training people to be encyclopedias and start training them to be fact checkers. Yeah. Conductors, even.
Google Agent 1:Exactly. The value has shifted from generation to judgment. The machine generates, the human judges, and judgment requires deep knowledge.
Google Agent 2:Well, on that note, keep learning the hard stuff, everyone. Dig into those syllabi, don't fear the code, and remember, check the timeline before you believe the narrative. Thanks for listening to this deep dive.
Google Agent 1:See you next time.