A familiar scene is playing out in boardrooms. Leaders talk about “AI savings” as though the point of the technology were simply to remove people from payroll. Some firms are already doing exactly that, announcing job cuts with “AI and automation” as part of the rationale. (Challenger, Gray & Christmas)
The irony is that this is the least interesting advantage AI confers. The bigger advantage is capability: the ability for the same team to produce work that is faster, deeper, more tailored, and more ambitious. In fields as varied as writing, customer support, software engineering, and high-end consulting, controlled studies show meaningful jumps in speed and quality when people are augmented with modern generative AI tools. (PubMed)
That changes competition. Today's “wow” becomes tomorrow's baseline. The firms that keep talent and amplify it will begin to operate on a different performance frontier, while the cost-cutters discover that margin improvements are easy to copy, but capability advantages compound.
This paper sets out the evidence, explains the dynamics of the coming capability race, and offers a practical leadership agenda for firms that want to win it.
The new executive temptation: cost out, not capability up
When a technology arrives that can draft, summarise, analyse, and code, the first reflex is understandable: “Where can we take cost out?” Investor pressure, high interest rates, and the legacy habit of treating technology as an efficiency lever make that reflex feel rational.
It is also incomplete.
The labour market is already absorbing the shock. In the United States, Challenger, Gray & Christmas reported 108,435 announced job cuts in January 2026, and noted that “Artificial Intelligence” was cited for 7,624 of them, about 7% of the month's total. They also reported that firms referenced AI for 54,836 layoff plans in 2025, and 79,449 since 2023, when the firm began tracking AI as a stated reason.
Meanwhile, survey evidence suggests adoption is broad but still shallow. A February 2026 NBER working paper surveying almost 6,000 senior executives across the US, UK, Germany, and Australia finds about 70% of firms actively use AI. Yet executives' personal use averages only 1.5 hours a week, with a sizeable minority reporting no use at all. More strikingly, over 80% of firms report no impact on employment or productivity over the past three years, while expecting modest productivity gains and small employment reductions over the next three.
So why the rush to cut? Because “savings” are legible. They can be booked. They show up quickly in the margin line. Capability is messier. It takes redesigning work, retraining people, and living through a transition in which some productivity gains are reinvested rather than “taken”. In other words, it requires leadership.
What AI actually does: it makes cognition cheap
If the Industrial Revolution mechanised muscle, the AI revolution mechanises parts of cognition: drafting, pattern-matching, retrieval, synthesis, and first-pass analysis. That does not eliminate the need for humans. It changes the economics of what humans do.
The most useful way to think about the technology is not “replacement” but “a fall in the cost of producing a good first draft”. That sounds trivial until you remember what a modern organisation actually runs on: memos, analysis, proposals, tickets, presentations, emails, specifications, plans, reports, and decisions.
When first drafts become cheap, three things happen.
First, output expands. People do more because the bottleneck moves from production to judgement.
Second, quality rises. Good practitioners become faster without lowering standards, while weaker practitioners are pulled up.
Third, the frontier shifts. Teams attempt work that was previously too slow, too expensive, or too complex to justify.
This is not conjecture. It shows up in field experiments.
- In a preregistered experiment with 453 college-educated professionals doing mid-level writing tasks, access to ChatGPT reduced time taken by 40% and increased output quality by 18%. (PubMed)
- In customer support, a generative AI assistant increased productivity by about 15% on average, with larger effects for less experienced agents. (OUP Academic)
- In a controlled experiment with 95 professional developers, those using GitHub Copilot completed a coding task 55% faster on average than those without it. (The GitHub Blog)
- In a field experiment with 758 consultants using GPT-4 on realistic consulting tasks, AI users completed 12.2% more tasks, finished 25.1% faster, and produced results rated over 40% higher in quality for tasks within the tool's capability range.
The pattern is consistent: AI often amplifies human performance. It does not simply substitute for labour hours. It changes what “good” looks like and how quickly a competent professional can reach it.
Even sceptical institutional voices acknowledge the scale of the potential. Goldman Sachs Research argued in 2023 that generative AI could raise global GDP by 7% over time and expose the equivalent of 300 million full-time jobs to automation, while also noting that many roles are more likely to be complemented than fully substituted.
Notice the words. “Exposed” is not “eliminated”. “Complemented” is not “redundant”. The disruption is real, but it is not as simple as “the professional classes are about to be wiped out”.
The quality inflation effect: “wow” becomes baseline
Leaders often underestimate a brutal dynamic in competitive markets: when productivity tools spread, the benefits are not captured as leisure. They are captured as expectations.
Email did not shorten the working day. Spreadsheets did not reduce the need for analysts. PowerPoint did not cut the volume of presentations. Each tool raised throughput, and each raised the bar.
AI accelerates this dynamic because it compresses multiple white-collar skills at once. Drafting, summarisation, ideation, analysis, translation, and coding assistance now sit behind a text box. The “wow” of 2023 is already the baseline for competent teams in 2026.
A firm can cut 10% of staff and enjoy a short-term improvement in margins. But if competitors use AI to increase the sophistication of their work by 30% to 100% and to ship new products faster, then your margin gains buy you a quieter decline.
BCG's research is blunt about compounding advantage. It finds that only 5% of firms are “future-built” for AI, while 35% are “scalers” and 60% are lagging. The future-built group, BCG says, achieves five times the revenue increases and three times the cost reductions that others get from AI. These leaders reinvest returns into stronger people and technology capabilities, widening the gap.
The OECD warns that early adopters can secure benefits that grow quickly and can become self-reinforcing, partly because AI systems can generate data that improves future performance.
In that world, the main risk is not that you fail to cut enough cost. It is that you fail to build enough capability while the gap is still bridgeable.
The false dichotomy: massacre or UBI
The popular narrative oscillates between two poles:
- AI will take the jobs, the professional classes will be hollowed out, and society will need UBI.
- AI will create new work, nothing fundamental changes, and fears are overblown.
Both are comforting. Both are wrong in isolation.
The best evidence points to a third outcome: restructuring with painful churn.
The IMF estimates that AI could affect around 40% of employment globally, with higher exposure in advanced economies. It also stresses that outcomes will vary: some workers will benefit through higher productivity, while others face reduced labour demand and wage pressure, with risks of widening inequality.
The World Economic Forum's Future of Jobs 2025 release projects substantial job disruption by 2030, alongside net job creation, and reports that many employers plan to reduce workforce size due to automation while also investing in reskilling.
The OECD's 2024 cross-country work on AI in the workplace is revealing in a different way. It reports that large majorities of workers who use AI say it improves performance and enjoyment, yet sizeable minorities worry about job loss and about loss of autonomy and intensification of work.
Put these together and you get a clearer picture:
- Many tasks will be automated or partially automated.
- Many roles will be redesigned around higher judgement, deeper customer understanding, and better orchestration of tools.
- Some occupations will shrink, particularly those heavy in routine drafting, standard analysis, or repetitive coordination.
- New roles will grow, including AI product management, model risk, workflow design, data stewardship, prompt and evaluation engineering, and domain-specific “AI operations”.
- The transition will be uneven, and “uneven” is another word for “brutal”.
The professional classes are not facing extinction. They are facing a reshuffle in which competence becomes cheaper, output becomes abundant, and differentiation shifts to taste, judgement, trust, and the ability to define the right problems.
Why the cost-cutter is optimising for yesterday's sport
If AI makes certain outputs easier to produce, then cost-cutting looks smart only if the market values the same outputs at the same quality. It will not.
In many industries, customers are not paying you simply for producing a report, a plan, or a proposal. They are paying for insight, speed, confidence, and relevance. As AI diffuses, those attributes become table stakes.
The firms that treat AI as a headcount lever will tend to do three things:
- They harvest the savings rather than reinvesting them. This is the quickest way to show short-term improvement and the quickest way to fall behind in capability.
- They keep workflows intact and swap humans for tools. Yet evidence suggests that redesigning workflows is a key factor in unlocking real value.
- They create an organisational immune response. Employees see AI as a threat, not a partner. Adoption becomes covert, fragmented, and low quality.
The firms that treat AI as a capability lever do the opposite. They keep and upgrade talent. They redesign work. They share the productivity dividend between customers, employees, and shareholders. They build a culture where AI is normal rather than feared.
This is not just a philosophical stance. BCG's workforce-focused guidance is explicit: it argues that most AI value comes from people and operating model changes, and describes a “10-20-70” split in which the bulk of value comes from workforce changes rather than algorithms alone. It also reports that future-built companies plan to upskill more than 50% of employees on AI, compared with 20% for laggards.
It is difficult to outperform a competitor that is systematically raising the capability of its people while you are shrinking yours.
The real strategic shift: from labour substitution to ambition expansion
A useful mental model is to separate efficiency gains from ambition gains.
Efficiency gains are about doing the same work with fewer resources. Ambition gains are about doing better work, attempting harder work, and expanding the scope of what you can deliver.
Most early AI programmes are stuck in efficiency mode because it is easier to explain. “We will automate X and save Y” fits in a spreadsheet.
Ambition is harder to quantify. It also tends to create second-order effects: new products, new customer promises, new levels of service, new markets. That is precisely why it is strategically decisive.
Deloitte's State of Generative AI in the Enterprise (fourth edition, based on a global survey of 3,235 leaders in mid-2025) shows the tension. Many organisations report productivity improvements and cost reductions from GenAI, but far fewer report that they are already achieving revenue growth, and only a minority say they are deeply transforming or creating new products and services.
That is the story of the transition period. Most firms are still harvesting low-hanging fruit. The winners will be those that turn that fruit into a new orchard.
The brutal part: the transition is a leadership failure before it is a labour-market one
People like to blame the technology. The more accurate culprit, inside organisations, will be leadership failure.
The NBER evidence that executives use AI only around 1.5 hours a week on average should worry every board. If leadership does not use the tools, it cannot redesign work around them. It will default to familiar moves: mandates, pilot programmes, layoffs, procurement, and slogans.
The pattern is consistent across sectors: technical capability is surging; organisational capability is not. The bottleneck is not model performance. It is human systems: decision rights, incentives, workflow design, and the willingness to change how work gets done.
Leadership determines whether the “brutal” is a temporary adjustment or a strategic self-harm.
The transition will be brutal because:
- The performance bar rises faster than training systems can adapt. Your best people will adjust quickly and demand a different environment. Your average people will feel exposed.
- Entry-level work gets squeezed. Many junior tasks are precisely the kind AI accelerates: drafting, basic analysis, first-line support. This creates a career pipeline problem unless you redesign apprenticeship models.
- Work intensifies. When output becomes easier, volume often increases. The OECD notes concerns about work intensity and autonomy, alongside productivity benefits.
- Firms misread early gains as “done”. They cut headcount, then discover that scaling AI requires more integration work, more domain expertise, and more process redesign than they budgeted for.
Two paths: the margin strategy and the capability strategy
The choice can be stated plainly.
The margin strategy
A firm treats AI primarily as a cost-reduction tool. It will likely deliver:
- quick savings in functions heavy on routine drafting and coordination
- a burst of investor approval, especially if framed as “AI-driven efficiency”
- a slow decay in differentiation as competitors raise quality and speed
It also carries hidden costs:
- loss of institutional knowledge and customer nuance
- lower capacity to redesign workflows and create new offerings
- a culture where AI use is feared, hidden, or resisted
The capability strategy
A firm treats AI as a general-purpose amplifier and reinvests the productivity dividend. It will likely deliver:
- compounding performance gains as skills, tools, and workflows co-evolve
- faster product cycles, better proposals, and sharper decision-making
- stronger employee motivation as drudgery declines and mastery rises
BCG's work implies this compounding effect is already visible: future-built companies reinvest, widen the gap, and are “pulling away”.
The capability strategy is harder. It looks slower at first. It also makes you more dangerous over time.
What leaders should actually do: a practical agenda
The thesis of this paper is not “never reduce headcount”. It is “do not confuse headcount reduction with an AI strategy”.
If you want the upside of AI without hollowing out your future, focus on six leadership moves.
1) Declare where you will compete on capability, not just efficiency
Decide which outcomes you want to be “unfairly good at” in two years: customer responsiveness, proposal quality, product discovery speed, risk detection, code throughput, claims handling, supply chain planning.
If you cannot say what you will do better, you will default to doing the same things cheaper. That is not a strategy. It is an accounting exercise. BCG's “future-built” concept is a reminder that AI advantage concentrates in core functions, and that leaders who moved early enjoy outsized benefits.
2) Treat the productivity dividend as capital to reinvest, not cash to extract
The most valuable question is not “how many roles can we remove?” It is “what do we do with the hours we just freed?”
GitHub's research is telling here. Developers reported that AI assistance helped preserve mental energy on repetitive tasks and focus on more satisfying work, not simply finish earlier.
If you capture every hour as cost-out, you train the organisation to hide AI usage or to use it defensively. If you reinvest a portion into higher-value work, you get the flywheel.
3) Redesign workflows end-to-end, not task-by-task
AI makes individual tasks faster. Competitive advantage comes from redesigning the whole workflow.
Noy and Zhang found that AI shifts work away from rough drafting towards idea-generation and editing. That is workflow redesign in miniature.
Do not bolt AI onto the side of existing process maps. Redraw the maps.
4) Build an “AI apprenticeship” model for early-career talent
If AI absorbs junior drafting work, you must replace it with structured learning pathways. Otherwise you will wake up in three years with a shortage of mid-level talent because you stopped training it.
BCG warns that routine tasks within many roles will become automated, changing entry-level positions and requiring new career paths and apprenticeship models.
5) Make managers the adoption engine, not a compliance layer
One reason AI programmes stall is that leaders try to “roll out” tools without changing managerial behaviour. BCG reports that in future-built companies, far more managers role model AI use and incorporate it into decision-making and daily operations than in laggards.
This is a Daniel Pink point as much as a consulting one: people follow meaning and mastery, not mandates. Your managers need to show how AI helps teams win, not how it helps finance cut.
6) Build trust through guardrails, measurement, and honest communication
AI is powerful and imperfect. The HBS “jagged frontier” study found that AI improved speed and quality for tasks within the frontier but reduced correctness for tasks outside it.
So you need:
- clear rules for where AI can be used
- human validation processes for high-risk outputs
- quality metrics, not just throughput metrics
- an organisational narrative that says: we are using AI to raise capability, and we will help people adapt
The OECD's findings on worker concerns about autonomy and intensity should be treated as operational risks, not as soft HR noise.
Trust is a performance multiplier. Fear is a tax.
A blunt conclusion: margins are easy to copy; capability is hard to catch
AI will make many things cheaper. That does not mean the winners will be the cheapest.
In markets where customers have choice, the winners tend to be those who offer better outcomes: faster decisions, better service, sharper products, richer insight, and more reliable execution. AI makes those outcomes more achievable, but only for organisations willing to redesign work and invest in people.
Cutting headcount to “capture AI savings” is not automatically foolish. But if that is the centre of your AI story, you are optimising for yesterday's game: reducing the cost of producing outputs whose market value is about to fall.
Your competitors, if they keep talent and amplify it, will not just do your work faster. They will do different work, at a different level of quality, at a pace you cannot match with a smaller, thinner organisation.
And the cruellest twist is that, by the time you realise this, the gap may already be compounding.
White paper for senior leaders navigating AI at scale
Warren Paull · Evolve Cubed · January 2026