Wednesday, February 25, 2026

Prediction Warning: The primary danger from AI is not an economy-wide jobs collapse

The Conversation’s Renaud Foucart insists the primary danger from AI is not an economy-wide jobs collapse, but a widening gap in income, power, and opportunity between those who can effectively use or own AI and everyone else.

The argument

  • Recent data show historically low unemployment in the EU, UK, and US despite rapid AI adoption, which undercuts the idea that mass joblessness is already unfolding.
  • AI is framed as another major general-purpose technology—like railways, computers, or the internet—that slowly reorganizes work rather than instantly eliminating it.

Why he says unemployment isn’t the main risk

  • Job creation: Large-scale technological shifts have historically created new tasks, sectors, and firms, so total employment tends to persist even as specific roles are automated.
  • Layoff narratives: Many current “AI layoff” stories reflect firms using AI as a justification for ordinary cost-cutting, not clear evidence of a distinct, AI-driven jobs apocalypse.

The risk he thinks is real

  • The core concern is that AI amplifies inequality: some workers and firms become much more productive and wealthy, while others stagnate in low-wage, low-productivity work.
  • Research cited in the article finds that highly skilled entrepreneurs and professionals capture most of the gains from AI assistance, because using advice well is itself a skill.

“And just as there has been no immediate AI boom when it comes to economic growth, there is no immediate shift in employment. What we see instead are largely firms using AI as an excuse for standard job-cutting exercises. This then leads to a different question about how AI will change how meaningful our jobs are and how much money we earn.”

“With technology, it can go either way.”

How the inequality mechanism works

  • Lower-ability or less advantaged users are less likely to follow or leverage high-quality AI advice, so their performance improves less even when they have access to the same tools.
  • This dynamic risks splitting society into:
    • A large group using AI but stuck in poorly paid, tightly controlled jobs.
    • A smaller group of well-educated “AI bosses” who design, own, and orchestrate the systems and reap most of the rewards.

Policy and societal implications

  • The article stresses that transitions are “always hard” and that policy will determine whether AI’s gains are broadly shared or concentrated.

  • The recommended goal is to help as many people as possible become the boss of the machines, through education, upskilling, and institutional design, rather than their servants.

Lee Cleveland
Lee is the Editor-in-Chief and founder of 2026PREDICT.com (predictionsandodds.com)—a cutting-edge platform dedicated to analyzing and tracking the accuracy of prediction markets and forecasting models.
