Evolve Cubed Research

Leadership as
Level Design

Why the AI era turns executives into game designers, not commanders or conveners

Warren Paull · February 2026 · 22 min read
[Image: An intricate tabletop strategy game viewed from above, with glowing pathways through miniature terrain]
Leadership in the AI era is not about commanding players. It is about designing a world worth playing in.
“The game is not the experience. The game enables the experience.”
Jesse Schell, The Art of Game Design
Executive Summary

For a century, leadership has oscillated between two dominant modes. The first is command and control: set direction, allocate work, inspect progress. The second is the softer choreography of the matrix: persuade, align, convene, negotiate across functions. Both are still useful. Neither is sufficient.

AI compresses decision cycles, decentralises authority, and makes systems harder to “see” from the top. Evolve Cubed describes the resulting leadership breakage plainly: under pressure, leaders either collapse back into command or stall in coordination, and neither holds when AI overwhelms networks and timescales.

A more accurate metaphor is emerging. Modern leaders increasingly resemble video game designers. Game designers do not “manage” player behaviour by issuing instructions. They design environments and decision architecture: rules, incentives, feedback loops, assets, constraints, and guardrails. Within that designed world, players generate their own strategies and experiences at speed and at scale.

Gartner predicts that by 2028, 15% of day-to-day work decisions will be made autonomously by agentic AI, and 33% of enterprise software applications will include it. Yet Gartner also forecasts that over 40% of agentic AI projects will be cancelled by end-2027 because costs rise, value is unclear, or risk controls are inadequate.

That tension is exactly why leadership must evolve into environment design. Not looser control, but better-designed control. Not more meetings, but sharper rules of play. Not micromanagement, but world-building with rigorous guardrails.

The old metaphors are breaking

Command and control: reliable, until the map changes faster than the orders

Command and control works when work is repeatable, variance is low, and the centre can see enough of the battlefield to issue meaningful instructions. AI changes each of those assumptions.

When AI tools raise output ceilings and reduce execution friction, the bottleneck shifts from doing the work to deciding what to do, how to do it, and how to govern it. Evolve Cubed's framing is stark: technical capability is surging, organisational capability is not, and the widening gap is fundamentally a leadership problem.

AI also changes the tempo. Decisions that once took weeks of analysis now happen in hours because the analysis is cheap. In that environment, a leader who tries to remain the hub becomes the constraint.

The matrix: sophisticated persuasion, until coordination becomes the tax on speed

Matrix leadership assumed the key challenge was alignment across silos: getting smart people to agree. It produced a repertoire of influence skills, stakeholder management, and cross-functional forums.

AI stresses that model too. Evolve Cubed notes that AI floods the network and human coordination becomes the bottleneck. In other words, the matrix can become a machine for producing meetings at exactly the moment the organisation needs momentum.

The uncomfortable conclusion is that the “right” answer is less often agreed in advance. Value comes from rapid exploration, safe experimentation, and local adaptation. Traditional influence still matters, but it is no longer the primary act.

Why game design is the right leadership analogy

A well-designed game is not a script. It is a system.

Players do not succeed because the designer told them exactly what to do. They succeed because the designer created a world in which the next best move is discoverable, the incentives are intelligible, and the feedback is immediate enough to learn.

This is why Sid Meier's famous line has endured: a good game is a series of interesting decisions. The phrase matters because it frames design as decision architecture: the craft is in shaping the decisions that players face, not in dictating their actions.

Schell goes further, and it is the hinge for modern leadership: the game is not the experience; the game enables the experience. Players create their own stories inside the structure.

That is exactly what leaders now need to do. Not to design every move, but to design the environment in which moves are made.

[Image: Hands arranging translucent geometric pieces on a light table, creating an interconnected system]
The craft is in shaping decisions, not dictating actions.

Choice architecture: behavioural science already has the language for this

In behavioural economics, “choice architecture” means organising the context in which people make decisions. A nudge, in Thaler and Sunstein's definition, is any aspect of that choice architecture that alters behaviour predictably without forbidding options or significantly changing incentives.

Leadership-as-game-design is simply choice architecture applied to organisations at scale, with three crucial upgrades:

If you accept that framing, the leader's job changes from “decide and direct” to “design the decision environment”.

Evolve Cubed describes this destination as the Orchestrator: a leader who creates conditions for humans, processes, and AI to self-organise within defined guardrails. The game designer is the most vivid way to understand what that really entails.

AI introduces a new class of actor: NPCs that can act

The video game metaphor becomes more than cute when you consider agentic AI.

In games, NPCs are non-player characters: entities with behaviours, goals, and constraints defined by the system. In organisations, AI agents are becoming exactly that: semi-autonomous actors operating inside workflows, software systems, and data environments.

Gartner's view is that many “agentic” claims are hype, including “agent washing”, and that only a small fraction of vendors have meaningful agentic capabilities. Yet the direction of travel is clear enough that Gartner expects substantial autonomy in work decisions and wide embedding in enterprise software by 2028.

Crucially, Gartner links failure not merely to technical immaturity but to cost, business value, and risk controls. That is a design problem. A poorly designed world produces chaotic NPC behaviour, and expensive clean-up.

[Image: Chess board with a mix of traditional pieces and small robot figures, representing human and AI actors]
When organisations include both human players and AI agents, leadership becomes world design.

Two more forces accelerate this shift:

Put plainly: leaders are being asked to run worlds containing both players and semi-autonomous entities. That requires design thinking, not only management technique.

Organisations are complex adaptive systems, not machines

The strongest academic support for the game-designer model comes from complexity science.

The NHS, analysed explicitly as a complex adaptive system, is described as a network of interacting agents where control is dispersed, with no single point of command, and behaviours can emerge through adaptation and learning. The paper's conclusion is direct: leaders must understand what drives the system and how to create conditions for creativity and positive cultures to flourish.

Leadership research on emergence similarly argues that transformation is often the product of dynamic interactions rather than directives from a central decider, and identifies “conditions” leaders can create to enable emergence and stabilise it through constraints and feedback.

This is the hidden logic behind the game metaphor. Games are complex systems designed to produce emergent outcomes. You cannot “command” emergence into existence. You can only design the conditions in which it becomes likely and valuable.

In game design, “emergent gameplay” is explicitly defined as situations where players can do things that arise from the mechanics but were not planned by designers. It works because there is both freedom and structure, autonomy within rules.

If you replace “players” with “employees” and “mechanics” with “process, data, and tools”, you have an eerily accurate description of what AI-enabled organisations need to become.

The leader-as-game-designer model

To make this operational, we need a map. The game designer model breaks leadership into three design layers:

A translation table: game systems to organisational systems
| Game Element | What It Does in Games | Organisational Equivalent |
| --- | --- | --- |
| The "physics engine" | Defines what is possible, stable, what breaks | Governance, risk controls, compliance, auditability |
| The "level" or world | Provides terrain, resources, constraints | Operating model, workflows, platforms, tooling |
| Quests and objectives | Give direction without prescribing exact moves | Outcome-based goals, decision rights, clarity of intent |
| Mechanics and rules | Create repeatable interactions | Standard ways of working, playbooks, APIs, automation |
| Economy and rewards | Shape behaviour through incentives | Metrics, recognition, career incentives, budgeting |
| NPC behaviour | Adds capability and friction, creates dynamics | AI agents, copilots, automation bots, decision support |
| Telemetry and live ops | Observe behaviour, patch, rebalance | Adoption analytics, model monitoring, improvement |
| Anti-cheat | Prevents exploits, keeps play fair and safe | Guardrails, access controls, red teaming, incident response |

This table is intentionally blunt. The point is not to trivialise leadership. It is to clarify where effort should go. Most executives spend too much time debating the “story” and too little building a coherent “physics engine”.

Six principles of leadership as environment design

1) Start with the experience you want to enable, not the behaviour you want to control

Schell's distinction matters here: the system enables the experience, it does not dictate it. Leaders should define the experience they want for customers and employees, then design the conditions that make it likely.

This is where many AI programmes go wrong. They start with tools. Game designers start with player experience and only then decide mechanics.

In organisational terms, this means being explicit about:

2) Design “interesting decisions” at the edge

Meier's idea is not about making work entertaining. It is about making decisions meaningful: frequent enough to matter, bounded enough to learn from, and consequential enough to create ownership.

AI pushes decisions to the edge because frontline staff and product teams now have access to analysis, drafts, and prototypes that once lived only at the centre. If leadership does not redesign decision rights and guardrails, the edge either freezes (fear and confusion) or runs wild (inconsistent, unsafe, hard to audit).

The aim is to create “decision architecture” that lets people move quickly while staying accountable. Evolve Cubed's framing is explicit: the leaders who win will be responsible for decision architecture that enables people and AI to self-organise with clarity, trust, and momentum.

3) Build guardrails that make speed safe

Guardrails are not bureaucracy. They are the physics engine.

This is where regulation and standards matter. The European Commission's timeline for the EU AI Act is unambiguous about staged applicability through 2025 to 2027, and full application from August 2026 for much of the regime. ISO/IEC 42001 is positioned as the first AI management system standard. In parallel, NIST's AI Risk Management Framework is designed to help organisations incorporate trustworthiness considerations across design, development, and deployment.

Translated into leadership practice, guardrails should be:

This is the difference between “We trust you” as a slogan and “We designed a system where trust is rational.”

4) Design for emergence, then stabilise what works

Complex adaptive systems do not behave like clockwork. The NHS paper makes the point: behaviours emerge through self-organisation, with dispersed control. The ScienceDirect research on emergence describes leadership as enabling the conditions under which new order arises, then stabilising through local constraints and feedback.

Game designers understand this intuitively. They expect players to surprise them. Good organisational leaders must now expect the same, especially when AI expands the space of possible solutions.

So, leaders should treat early AI-enabled work as “playtesting”:

The trap is to confuse emergence with anarchy. The goal is not maximum freedom. It is productive freedom: creativity inside guardrails.

5) Instrument everything and run the organisation like a live service

Game designers do not ship once and walk away. They watch how players behave, where they get stuck, what exploits appear, and what content generates retention. Then they rebalance.

AI-enabled organisations need the same cadence. This is not optional once AI agents are taking actions in workflows. If you cannot answer the following questions with data, you do not have a designed world; you have a rumour:

This is also where Gartner's warning about agentic AI failures becomes practical. “Inadequate risk controls” often means a lack of observability and enforceable boundaries, not merely a missing policy.
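To make “live service” concrete, here is a minimal sketch of agent-action telemetry in Python. Every name here (`AgentAction`, `Telemetry`, the pricing-agent example) is an illustrative assumption, not a real framework; the point is that each agent action becomes a queryable record rather than an anecdote.

```python
import time
from dataclasses import dataclass, field

@dataclass
class AgentAction:
    """One auditable record of an AI agent acting in a workflow (illustrative schema)."""
    agent_id: str      # which agent acted
    tool: str          # which capability it invoked
    decision: str      # what it decided or produced
    approved_by: str   # "policy" for autonomous actions, or a human reviewer's id
    timestamp: float = field(default_factory=time.time)

class Telemetry:
    """Append-only log that makes agent behaviour queryable instead of anecdotal."""
    def __init__(self):
        self.events: list[AgentAction] = []

    def record(self, action: AgentAction) -> None:
        self.events.append(action)

    def actions_by(self, agent_id: str) -> list[AgentAction]:
        return [e for e in self.events if e.agent_id == agent_id]

    def autonomous_share(self) -> float:
        """Fraction of decisions taken with no human in the loop."""
        if not self.events:
            return 0.0
        auto = sum(1 for e in self.events if e.approved_by == "policy")
        return auto / len(self.events)

log = Telemetry()
log.record(AgentAction("pricing-agent", "discount_api", "applied 5% discount", "policy"))
log.record(AgentAction("pricing-agent", "discount_api", "applied 20% discount", "j.smith"))
print(f"{log.autonomous_share():.0%} of decisions were autonomous")  # → 50% of decisions were autonomous
```

A real implementation would write to durable, tamper-evident storage, but even this shape shows the design choice: observability is built into the action itself, not reconstructed afterwards.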

6) Preserve humanity: motivation is a design constraint, not an HR slogan

Daniel Pink's enduring contribution was to re-centre modern motivation around autonomy, mastery, and purpose. Underneath that sits a strong research tradition: self-determination theory argues that conditions supporting autonomy, competence, and relatedness foster higher-quality motivation and engagement.

In the game metaphor, this is obvious. A game where players have no autonomy is not a game; it is a movie. A game where players cannot develop competence is not engaging; it is frustrating. A game with no social fabric quickly becomes lonely.

AI changes the tools. It does not change the basic psychology. Leaders who design an AI-enabled world that strips autonomy, degrades competence, or destroys belonging will get compliance at best and disengagement at worst. They will also get the most dangerous outcome: quiet sabotage via workarounds.

Designing NPCs: how to think about AI agents as organisational actors

If AI agents are NPCs, leaders need to stop treating them like software features. They are actors with permissions, memory (sometimes), goals (explicit or implied), the capacity to take action, and the capacity to surprise.

Protocols like MCP exist precisely because organisations want AI systems to connect to data and tools in a standardised way. That increases capability, and therefore risk. It also changes the leadership task: the world must be designed so that connected agents behave safely by default.
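As a sketch of “safe by default” (deliberately not the real MCP API; `ToolGate` and the agent and tool names are hypothetical), the idea is deny-by-default: a connected agent may only invoke tools on an explicit allowlist, and everything else fails loudly instead of silently.

```python
class ToolGate:
    """Deny-by-default wrapper between an agent and the tools it can reach (illustrative)."""
    def __init__(self, allowlist: dict[str, set[str]]):
        # agent_id -> set of tool names that agent is permitted to call
        self.allowlist = allowlist

    def call(self, agent_id: str, tool: str, func, *args):
        allowed = self.allowlist.get(agent_id, set())  # unknown agents get nothing
        if tool not in allowed:
            raise PermissionError(f"{agent_id} is not permitted to use {tool}")
        return func(*args)

gate = ToolGate({"research-agent": {"search_docs"}})

def search_docs(query: str) -> str:
    return f"results for {query!r}"

print(gate.call("research-agent", "search_docs", search_docs, "AI Act timeline"))
# → results for 'AI Act timeline'
# gate.call("research-agent", "send_email", ...) would raise PermissionError
```

The design choice is the default: capability is something an agent is granted, never something it inherits by being connected.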

A practical way to design NPCs in business is to define four elements:

This sounds like governance. It is governance. But it is also design. It determines whether the agent is a helpful companion, a reckless intern, or a latent liability.

A leadership playbook: from metaphor to operating rhythm

A metaphor is only useful if it changes behaviour. Here is what “leader as game designer” looks like as an operating rhythm.

Step 1: Write the “design brief” in plain English

Game studios begin with a creative brief. Leaders need an operational equivalent:

If you cannot state this clearly, you will default to tool adoption theatre.

Step 2: Build the physics engine first

Before “content”, build the constraints: data governance and access controls, logging and audit trails, model usage policies tied to workflows, incident response routes, and risk reviews proportionate to impact.
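One way to make “risk reviews proportionate to impact” testable is policy-as-code. A minimal sketch, assuming hypothetical risk tiers and control names:

```python
# Illustrative only: a policy table mapping (hypothetical) risk tiers to the
# controls a use case must clear before launch.
REQUIRED_CONTROLS = {
    "low":    {"usage_logged"},
    "medium": {"usage_logged", "human_review"},
    "high":   {"usage_logged", "human_review", "incident_runbook", "audit_trail"},
}

def missing_controls(risk_tier: str, controls_in_place: set[str]) -> set[str]:
    """Return the controls still missing for this tier (empty set = clear to launch)."""
    return REQUIRED_CONTROLS[risk_tier] - controls_in_place

print(sorted(missing_controls("high", {"usage_logged", "human_review"})))
# → ['audit_trail', 'incident_runbook']
```

Expressed this way, the physics engine is something a launch pipeline can check automatically, rather than a policy document nobody re-reads.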

External regimes are making this unavoidable. The EU AI Act timeline alone should force leaders to treat this as strategic infrastructure, not compliance admin.

Step 3: Design quests, not tasks

Stop handing out tasks that can be solved by templates and AI. Design quests: clearly defined outcomes, explicit constraints, clear success criteria, room for multiple solutions. This is how you harness emergence without losing coherence.

Step 4: Run playtests with real telemetry, not anecdote

Pick a bounded domain. Instrument it. Watch actual behaviour. Then iterate.

The key is to treat the first versions as prototypes, not as proof of competence. Too many organisations treat an early demo as a finish line. Gartner's cancellation forecast is a reminder that the real challenge begins when you move from proof of concept to scaled production with risk controls.

Step 5: Stabilise winning patterns into the operating model

When something works, make it normal: codify the workflow, train it, adjust incentives, embed it in performance systems, and improve it continuously. This is what Evolve Cubed calls Strategic Integration: locking in gains so progress does not fade.

Step 6: Protect psychological safety without losing standards

Speed requires learning. Learning requires candour. Evolve Cubed is explicit that transformation at pace depends on trust and psychological safety.

The trap is to confuse psychological safety with comfort. In high-performing teams it means people can speak up, admit mistakes, and surface risks early, so the system gets better faster. That is the cultural equivalent of a good bug-reporting pipeline in a live game.

The uncomfortable truth: this is harder than it sounds

Many leaders like the idea of empowerment. Fewer like the disciplines required to make empowerment safe.

Designing environments is not “hands off”. It is hands on at the right layer: ruthless clarity about intent, explicit rules of engagement, real-time telemetry, rapid rebalancing, a mature approach to risk and compliance, the willingness to let teams surprise you, and the spine to intervene when boundaries are breached.

This is why Evolve Cubed's model emphasises a shift from Driver to Connector to Orchestrator, and why the Orchestrator is defined by creating conditions for emergence within guardrails.

Command and control is cognitively simpler. The matrix is socially familiar. Environment design is neither. It is strategic, systemic, and deeply human.

Conclusion: the next decade belongs to world-builders

AI is not merely a new tool. It is a new class of actor, and a new accelerator of organisational complexity. When authority decentralises, decisions compress, and systems become opaque, the leader cannot remain the chief decision-maker.

The leadership advantage will come from those who can:

A game designer does not micromanage the player. They build a world worth playing in. That is now the job.

This paper draws on publicly available research from Gartner, the European Commission, NIST, ISO, BehavioralEconomics.com, Harvard NudgeU, the National Library of Medicine, ScienceDirect, Game Design Skills, Anthropic, and Evolve Cubed. All citations link to source material. Published February 2026.
