The Digital Transformation Playbook

AI’s Next Jobs: Four Futures For 2030

Kieran Gilmurray

This World Economic Forum white paper explores how the intersection of artificial intelligence and workforce readiness will transform the global labor market by 2030. 

It presents four distinct future scenarios ranging from "Supercharged Progress," where humans and machines thrive together, to "The Age of Displacement," where rapid automation overwhelms social systems. While business executives anticipate productivity gains and increased profit margins, there are significant concerns regarding job loss and stagnant wages. 

To navigate these uncertainties, the report suggests "no-regret" strategies, such as prioritizing human-AI collaboration and integrating lifelong learning into corporate cultures. 

Ultimately, the document serves as a strategic roadmap for leaders to align technological investment with human capital development to ensure long-term economic resilience.

TLDR / At A Glance:

• executive expectations of displacement and weak wage growth
• the two axes: AI speed and workforce readiness
• scenario one: supercharged progress with inequality risk
• scenario two: rapid automation and concentrated power
• scenario three: co‑pilot economy and hybrid roles
• scenario four: stalled progress and a bifurcated market
• no‑regret strategies for alignment, augmentation, foresight, culture, and multigenerational teams
• the policy question of distributing productivity gains

The ground under work is shifting, and not because algorithms woke up one morning smarter than us. The real pivot is whether people, teams, and institutions are ready to turn AI from a cost-cutter into a capability multiplier. We unpack a clear framework built on two volatile forces - the speed of AI progress and the readiness of the workforce - to show how four distinct futures could shape jobs, wages, and power by 2030.

We start by confronting a stark survey signal: most executives expect AI to boost profit margins while leaving wages flat, with more jobs displaced than created. From there we explore what happens when exponential breakthroughs meet a prepared workforce: supercharged growth, but with rising inequality risk. We also look at what unfolds when the same breakthroughs collide with skills gaps: rapid automation, historic drops in confidence, and power concentrated in the firms that control foundational models. Then we shift to slower, steadier paths: a co‑pilot economy where augmentation is normal, more than 40% of skills evolve, and hybrid roles thrive; and a stalled-progress scenario where tools improve but readiness lags, displacement hits routine roles, and skilled trades gain value through scarcity.

Along the way, we share practical moves leaders can make now: align technology and talent strategies, prioritise human–AI collaboration over blunt automation, use predictive analytics to forecast skills, strengthen culture and ethical guardrails to build trust, and design multigenerational learning teams that pair domain veterans with AI‑native talent. The throughline is simple and urgent: the difference between abundance and fracture is human readiness, not model size.

If this conversation sharpened your thinking

Support the show


𝗖𝗼𝗻𝘁𝗮𝗰𝘁 my team and me to get business results, not excuses.

☎️ https://calendly.com/kierangilmurray/results-not-excuses
✉️ kieran@gilmurray.co.uk
🌍 www.KieranGilmurray.com
📘 Kieran Gilmurray | LinkedIn
🦉 X / Twitter: https://twitter.com/KieranGilmurray
📽 YouTube: https://www.youtube.com/@KieranGilmurray

📕 Want to learn more about agentic AI? Then read my new book, Agentic AI and the Future of Work: https://tinyurl.com/MyBooksOnAmazonUK


Google Agent 2:

Welcome to the deep dive. So if you've been working or you know running a business over the past couple of years, you know that AI is no longer this futuristic concept.

Google Agent 1:

Not at all.

Google Agent 2:

It has moved straight into uh reshaping global workflows territory. And the speed, the sheer speed of commercialization is creating this enormous amount of uncertainty.

Google Agent 1:

It really is. And that uncertainty is exactly why we wanted to do this deep dive.

Google Agent 2:

Right.

Google Agent 1:

We're looking at a foundational white paper from the World Economic Forum. It's called AI and Talent in 2030. And what's interesting is it doesn't try to predict one single future.

Google Agent 2:

Which is impossible, really.

Google Agent 1:

Totally impossible. Instead, it maps out four distinct, plausible scenarios for how the job market could look by the end of the decade. So our mission today is really to unpack that framework so you can understand the critical pivot points that will, you know, define the working world for you.

Google Agent 2:

And before we even get into those four futures, the immediate tension is right there in their executive surveys. Yeah. This is the mindset right now. You'd think with a huge technological leap like AI, leaders would be planning for massive growth, new opportunities for their people. But the expectations are, well, they're jarringly misaligned.

Google Agent 1:

They really are. And it exposes this deep economic fear. When they asked executives about AI's impact, 54 percent globally said they expected AI to displace existing jobs.

Google Agent 2:

More than half.

Google Agent 1:

More than half. But here's the kicker: the really critical data point. Only 24 percent expect AI to create a substantial number of new jobs.

Google Agent 2:

Wait, so over half expect jobs to vanish, but less than a quarter expect new ones to be created? I mean, that points to a net contraction of employment.

Google Agent 1:

Exactly. Or at the very least, a massive stagnation. It shows that right now, AI is being seen as a replacement technology, a cost-saving tool, not an engine for human expansion.

Google Agent 2:

And this leads to what you called the most critical misalignment.

Google Agent 1:

Right. So 44.6% of these same executives expect AI to significantly increase their corporate profit margins.

Google Agent 2:

High profit margins, great for shareholders.

Google Agent 1:

But contrast that with what they expect for labor. Only 12.1% expect AI adoption to lead to higher wages. The expectation is crystal clear for most. High profit, stagnant pay for workers.

Google Agent 2:

That is a staggering gap. And it sort of sets the stage for everything else, doesn't it? It's this core tension that, if it's not addressed, is going to tilt all four of these potential futures toward uh instability.

Google Agent 1:

Which is why this framework is so useful. The WEF paper builds these scenarios on two critical, unpredictable vectors. The first is the pace of AI advancement. Okay. Is it going to be exponential, you know, sudden breakneck acceleration with unpredictable breakthroughs, or will it be incremental, just a steady, gradual evolution?

Google Agent 2:

So a massive shock versus a slow burn. What's the other piece?

Google Agent 1:

The other vector is workforce readiness. This is the human element, and it's key. It asks: Will the skills we need for an AI-driven economy be widespread?

Google Agent 2:

Meaning education, businesses, workers all adapt quickly.

Google Agent 1:

Exactly. Or will it be limited where skills gaps just persist, training lags, and maybe there's even resistance.

Google Agent 2:

Okay, so two speeds for AI and two states of human preparedness. You combine those and you get four very different futures for 2030. Let's start with the most dramatic one, the quadrant where AI progress is exponential.
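The 2x2 framework the agents just described can be sketched as a simple lookup, pairing each combination of pace and readiness with one of the four named futures from the paper. This is an illustrative aside, not code from the WEF report:

```python
# The WEF framework's two vectors (pace of AI advancement x workforce
# readiness) combine into four named future scenarios for 2030.
SCENARIOS = {
    ("exponential", "widespread"): "Supercharged Progress",
    ("exponential", "limited"): "The Age of Displacement",
    ("incremental", "widespread"): "The Co-pilot Economy",
    ("incremental", "limited"): "Stalled Progress",
}

def scenario(pace: str, readiness: str) -> str:
    """Return the named 2030 future for a given (pace, readiness) pair."""
    return SCENARIOS[(pace, readiness)]
```

So, for example, `scenario("exponential", "limited")` names the fractured future discussed next: "The Age of Displacement".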

Google Agent 1:

This is the high-stakes quadrant, a world of massive technological shocks, where the outcome is decided almost entirely by how well we've prepared.

Google Agent 2:

So let's take the optimistic view first. Scenario one, supercharged progress. This is exponential AI meets widespread readiness. What does that world look like?

Google Agent 1:

This is the story of successful adaptation. AI breakthroughs are happening constantly, reshaping every single industry. Productivity is soaring. But the key, the absolute key, is that the workforce was ready. They successfully navigated what the paper calls the agentic leap.

Google Agent 2:

Agentic leap. Okay, let's define that. It sounds like something out of science fiction.

Google Agent 1:

It does, but it's simpler than it sounds. It just means AI moves from being a simple tool, like a calculator, to being a capable, autonomous assistant. An agent.

Google Agent 2:

Ah, I see. So you're not managing the AI's tiny steps, you're managing its goals.

Google Agent 1:

Precisely. The successful worker in this future becomes an agent orchestrator. You're like a conductor of a digital orchestra, directing a portfolio of capable AI machines. You set the strategy, the agents do the work.

Google Agent 2:

That's a very powerful, high-value role. And the paper says the economic benefits are huge: GDP growth nearing double digits, corporate profits way up. But what's the catch?

Google Agent 1:

The catch is a big one. While the economy explodes, the social infrastructure just breaks under the strain. Inequality widens dramatically. Why? Because social safety nets, ethics, governance, they simply can't keep up with that technological speed. The wage premiums for those AI-ready orchestrators, the people who adapted, they've nearly doubled from what was projected just a few years ago.

Google Agent 2:

So if you can conduct the orchestra, you get rich fast.

Google Agent 1:

Very fast. But the source material is specific here. Wages actually erode in human-centric jobs. Think care, hospitality, specialized manual trades, the jobs that can't be automated but also don't scale with AI, creating this huge, visible income gap.

Google Agent 2:

So incredible growth, but with a deep social schism. Okay, let's look at the dark mirror of that. Scenario two.

Google Agent 1:

This is a socially fractured economy, and it's really driven by a lack of choice. The education and training systems failed to keep up, so businesses can't find those agent orchestrators we just talked about. And so they turn to mass automation as a desperate stopgap. They're not augmenting people, they're replacing them out of necessity.

Google Agent 2:

And the speed of that displacement must be terrifying.

Google Agent 1:

It is. The share of tasks absorbed by technology surpasses 50% globally. So more than half of all the work done today is automated. And in high exposure sectors, white-collar services, data processing, that number approaches 90%.

Google Agent 2:

90%. That's not just displacement, that's a social crisis. Unemployment would have to spike.

Google Agent 1:

It does. Consumer confidence plummets to a historic low below 44, signaling this widespread economic freeze born from insecurity. And there's a critical power shift here. Power concentrates among a handful of almost state-like companies that control the foundational models.

Google Agent 2:

The huge underlying AI systems.

Google Agent 1:

Correct. And the few companies that control access to those core systems gain this unchecked influence over the entire global economy. They set the price of automation. It's an incredibly high-risk future. And it's driven not by technology failing, but by our failure to adapt.

Google Agent 2:

It's just remarkable. The exact same technological speed leads either to prosperity with social strain or to outright societal fracture. And the only difference is whether we invested in skills.

Google Agent 1:

That's the fundamental lesson. It's the ultimate accelerator. If we're ready, it propels us. If we're not, it leaves us behind and very, very quickly.

Google Agent 2:

Right. So let's shift gears. What happens if AI progress isn't this seismic shock, but more incremental, a steady, gradual progress?

Google Agent 1:

Yeah, maybe after the AI bubble that the paper speculates might burst in the mid-2020s. We're now moving into futures where the change is slower, more manageable.

Google Agent 2:

Okay, let's start with the best-case slow scenario. Scenario three, the co-pilot economy. That's incremental AI plus widespread readiness.

Google Agent 1:

This is the pragmatic integration future. The hype has died down. Businesses realize AI is a powerful tool, but not a miracle cure. And because the workforce is ready, augmentation is the priority, not mass automation.

Google Agent 2:

So the goal is to make human workers, what, 20% better, not replace them?

Google Agent 1:

Exactly. And that widespread readiness, what it translates to for you, the worker, is that AI skills, things like prompt engineering, critically evaluating AI output, they become as common as basic digital literacy was back in 2020.

Google Agent 2:

And the paper says what? More than 40% of skills required for an average job will have changed by 2030?

Google Agent 1:

They will have. 40% is a massive change in just a few years. But because workers are ready, they absorb it. Routine tasks get augmented, which frees people up.

Google Agent 2:

And that creates more demand for uniquely human skills.

Google Agent 1:

Yeah. Complex problem solving, emotional intelligence, social negotiation, real managerial capacity.

Google Agent 2:

So you see the rise of these hybrid roles. The person with deep expertise in, say, medicine, who can also manage medical AI co-pilots, it sounds like a much more sustainable, less stressful transition.

Google Agent 1:

Absolutely. Productivity gains accumulate steadily. It breaks the slow growth patterns of the past without the chaos of the exponential scenarios. Workers stay in the loop.

Google Agent 2:

Okay, now for the last one. In some ways, this feels like the most frustrating scenario. Scenario four, stalled progress. Incremental AI, but limited readiness.

Google Agent 1:

This is a future defined by dashed hopes. AI is getting better, but the workforce isn't keeping up. So the real-world applications are brittle, adoption is slow, and the skills gaps are just persistent. That promised AI prosperity never really materializes for most people.

Google Agent 2:

So the overall economy doesn't get that big boost, but displacement still happens?

Google Agent 1:

It does, but in a really patchy, uneven way. Productivity growth is concentrated in the few businesses or even geographies that happen to have the right talent. And instead of augmenting, businesses use what little automation they can deploy defensively.

Google Agent 2:

What do you mean by defensively?

Google Agent 1:

Mostly to backfill scarce talent they couldn't hire or train in time.

Google Agent 2:

And who gets hit the hardest here?

Google Agent 1:

The displacement primarily hits routine roles, early career staff, administrative tasks, they're really exposed. But here's a fascinating market reversal that happens.

Google Agent 2:

Okay.

Google Agent 1:

The value of skilled trades and highly manual jobs increases dramatically.

Google Agent 2:

Why? Because if everyone's scrambling for AI skills and failing, the plumber or the certified welder becomes an incredibly scarce resource. Their value goes up regardless of AI.

Google Agent 1:

Precisely. You end up with a classic bifurcated economy. The highly skilled benefit from complexity, manual trades gain value from scarcity, but that administrative and routine middle just gets hollowed out.

Google Agent 2:

We've covered all four paths now. And given that businesses can't really know which one they're heading for, I mean, different sectors could be in different scenarios at the same time. What are the actions that deliver value in any of these futures? What are the no-regret strategies?

Google Agent 1:

This is the most crucial part, I think. A no-regret strategy is one that pays off whether AI is exponential or incremental and whether you start with a ready workforce or not.

Google Agent 2:

Okay, let's walk through the most important ones.

Google Agent 1:

First, and it might be the easiest to overlook, is to align your technology and talent strategies. IT cannot operate in a vacuum. You can't adopt transformative tech without evolving your workforce at the same time. The two have to be in lockstep.

Google Agent 2:

So HR and the chief technology officer need to be completely inseparable right now.

Google Agent 1:

They absolutely do. Second is to invest in human–AI collaboration. Prioritize augmentation over pure automation. Design workflows that require human–AI teams. This is what builds trust and boosts productivity right away.

Google Agent 2:

And to collaborate, you need to know what skills you'll need next year, not just today.

Google Agent 1:

Which leads to number three, anticipate talent needs. Businesses have to use foresight, even AI-powered predictive analytics, to scope out emerging capability gaps. You have to be thinking six months, a year, five years out, and investing in dynamic, continuous training.
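The "anticipate talent needs" advice boils down to simple foresight arithmetic: project each skill's demand forward and compare it with current supply. A minimal sketch, assuming a naive linear trend; the skill names and figures are invented for illustration, not taken from the paper:

```python
# Minimal capability-gap forecast: extrapolate demand per skill with a
# naive linear trend and flag skills where projected demand outstrips
# current internal supply. All figures below are hypothetical.
def forecast_gaps(history, supply, horizon=1):
    """history: skill -> yearly demand counts (oldest first);
    supply: skill -> current headcount with that skill.
    Returns {skill: shortfall} for skills whose projected demand
    exceeds supply after `horizon` more periods."""
    gaps = {}
    for skill, counts in history.items():
        trend = (counts[-1] - counts[0]) / max(len(counts) - 1, 1)
        projected = counts[-1] + trend * horizon
        if projected > supply.get(skill, 0):
            gaps[skill] = round(projected - supply.get(skill, 0), 1)
    return gaps

demand = {"prompt engineering": [5, 12, 20], "cobol": [4, 3, 2]}
headcount = {"prompt engineering": 10, "cobol": 6}
print(forecast_gaps(demand, headcount))  # flags only the growing skill
```

A real implementation would use richer models, but even this toy version makes the point: the gap is visible in the trend long before the hiring crisis hits.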

Google Agent 2:

You can't wait for the universities to catch up.

Google Agent 1:

It's way too slow. And beyond just the technical skills, the whole organizational mindset has to change. That's strategy four. Strengthen organizational culture and trust. Agility and curiosity have to be core values. When change is this fast, your employees need to feel safe to experiment, to fail, and to learn. And you need strong ethical guardrails to build trust in the AI systems themselves.

Google Agent 2:

And there was a really practical suggestion about dealing with generational gaps.

Google Agent 1:

Yes, the final point we wanted to highlight: design multigenerational workflows. Set up learning teams where older, experienced domain experts are paired with younger people who are, you know, just more naturally familiar with AI tools.

Google Agent 2:

That makes so much sense. The younger generation can accelerate the tech adoption. And the older generation provides the critical wisdom and context to make sure the AI is actually being applied correctly. It closes the skill gap and the culture gap at the same time.

Google Agent 1:

And the final reminder is just to prepare for variation. The impact will be wildly different across sectors. Financial services might be planning for supercharged progress, while construction could be facing stalled progress. Your strategies have to be tailored.

Google Agent 2:

This has been an incredibly clarifying breakdown. It really confirms that the future of jobs by 2030 is less about the speed of the technology itself and far more about the human capital strategies we prioritize today.

Google Agent 1:

Absolutely. The difference between the abundance of supercharged progress and the chaos of the age of displacement is entirely human readiness. We have to decide if we want AI to be a co-pilot or a replacement.

Google Agent 2:

It puts the responsibility squarely on leaders, both in corporations and in government, to invest in people, not just machines. Which brings us right back to where we started. That executive survey. High profit margins expected, minimal wage increases predicted.

Google Agent 1:

So here is a final provocative thought for you to consider. We know the current business expectation is for AI to generate wealth for the company and not necessarily for the employee. So, regardless of which scenario we end up in, how will governments and businesses develop the policy frameworks to address that resulting wealth and income disparity? How do you ensure societal stability in a world where productivity gains are high, but broad-based wage growth remains chronically stagnant?

Google Agent 2:

That is the trillion dollar policy question that will decide which future we get. Thank you for joining us for this deep dive.