Banks are investing in co-pilots, assistants and workflow automation. The constraint is not the tooling. It is whether the management layer can use approved tools responsibly, redesign real work and lead safe change inside regulated guardrails. Evolve Cubed builds that capability — across all four leadership levels, from first-line managers through to ExCo.
Most banks have licensed co-pilots, published responsible AI policies and opened digital-workplace access to managers. The gap that matters now is whether the management layer has been equipped to use those tools with confidence, redesign workflows around them and maintain governance visibility inside a regulated environment.
Managers are being asked simultaneously to lead people well, adopt approved tools responsibly and sponsor workflow change. Most organisations address that through communications campaigns or generic training. Neither produces the behaviour change that boards and regulators are asking for.
The hardest gap in regulated banking is not the tool or the policy. It is the manager layer — the point where AI adoption either lands or stalls.
Execution, direction, velocity. Effective in stable environments — but AI often makes pure command brittle and slow. Banks relying on Driver leadership alone find that adoption stalls at the manager layer.
Network, collaboration, coordination. Necessary — but not enough when the system is moving faster than people can align manually. Connector-heavy organisations struggle to embed consistent AI practice at scale.
Designs the conditions, decision rights and guardrails so the system can adapt without constant executive intervention. This is the mode banks need to build — and the mode this programme develops.
This is not an AI academy. Not a comms campaign. Not a transformation project. It is an operating model shift for the manager layer — built around leadership behaviour, governed AI use and workflow redesign, running simultaneously across all four leadership levels inside a single coherent architecture.
Managers develop the judgement, empowerment, coaching, delegation and cross-boundary execution skills to lead people through AI-enabled change. The programme works at the behaviour level — not awareness, not compliance, not enthusiasm.
Approved tools, safe-use logic, decision discipline and escalation boundaries built into daily operating rhythm. Responsible AI treated as a permissioned operating model — not a generic enthusiasm problem, not a compliance module added at the end.
Safe experiments, cleaner hand-offs, better routines and measurable workflow proof built into lived work — not homework. Teams identify and redesign the specific process points where AI adoption either compounds friction or removes it.
Every participant moves through the same five-module arc. Content, application and participant outputs are calibrated to leadership level. Work-embedded transfer is built into the design — not added as a follow-up.
Confidence and capability diagnostic. Individual and cohort heat map. Personal leadership audit.
Leadership identity in AI context. Team trust and delegation under uncertainty. Output: personal leadership brief.
Higher-quality decisions using approved tools. Escalation discipline. AI literacy in practice. Output: decision protocol.
Safe experiment design. Workflow friction mapping. Sponsor a workflow proof with your team. Output: workflow proof brief.
90-day operational AI leadership plan. Peer review. Sponsor presentation. Output: signed-off 90-day plan.
Transfer mechanism: stable pods of 4–6 · leader tasks between modules · safe experiments with real teams · manager check-ins at module gates
The guardrail design is one of the programme's strongest commercial differentiators in banking. The goal is to make safe AI easy, and unsafe AI hard. Managers leave with a practical, calibrated model for which tools to use, in which context, without constant escalation.
Approved tools in approved contexts. Clear decision rights. No escalation required. Managers operate with autonomy inside this zone and are expected to.
Higher-stakes contexts requiring a named decision-maker, documented rationale and visibility to a line sponsor. Use is appropriate — but not without a record.
Contexts where AI use is prohibited, not yet approved or carries regulatory, privacy or conduct risk that requires formal sign-off before proceeding.
The programme operates across four distinct leadership tiers — each with a configured curriculum, delivery format and governance cadence appropriate to that level. All four tiers share a single programme model, ensuring a common language, operating framework and measurable outcome standard across the entire organisation.
This is not four separate programmes run in parallel. It is one architecture, deliberately configured for four contexts. That distinction matters for coordination, governance visibility, and sustainable culture change.
The programme is weighted deliberately toward applied practice inside real operating contexts — not classroom instruction. Every module connects to live decisions and active workflow moments.
From individual contributor logic to delegation, coaching and decision clarity. Building the foundational leadership capability that makes AI adoption possible at team level.
Lead through others at pace. Improve hand-offs, reduce escalation bottlenecks and embed operational simplification across AI-enabled processes.
Strengthen cross-functional trade-offs, function-level sponsorship and operating-model leadership. Accountable for the conditions that allow AI adoption to scale below them.
Set the guardrails, sponsor adoption at enterprise scale and shape the organisational narrative on pace, control and outcomes. Board-level accountability for AI risk and capability.
We baseline before we begin and we track specific behaviour change through the programme. Reporting is designed for programme sponsors who need evidence, not anecdote.
We report contribution, not false ROI. The impact model is designed to be defensible to risk teams and within FCA-regulated governance frameworks.
Pre/post diagnostic movement on the leadership confidence and AI literacy baseline. Measured at individual, cohort and level.
Observable shifts in leadership behaviour as reported by line managers, peers and direct reports at module gate points.
Tracked, self-reported and sponsor-verified uptake of approved tools in daily operating practice inside the guardrail model.
Tangible outputs from safe experiments: documented workflow changes with named before/after states against specific friction points.
The programme is typically commissioned by a cross-functional sponsor group. Each role brings a distinct brief. All three need to be aligned before an engagement proceeds.
Managers who make faster, cleaner decisions without increasing escalation volume.
A governance model that lets AI adoption accelerate without losing operational control.
Measurable evidence of sustained change — not just completion rates.
A development programme that builds observable leadership behaviour, not just AI literacy.
Pre/post measurement designed to satisfy procurement, risk and ExCo scrutiny.
One consistent capability standard across all four leadership levels.
Managers who use approved tools confidently within the guardrail model.
Documented workflow changes — before/after states against real friction points.
Adoption that holds beyond the programme window.
Every engagement starts with a free conversation. What happens next depends on what your organisation actually needs — not on what a sales process prescribes. We do not propose rollouts before we understand the problem.
A focused conversation with COO, CHRO or AI/Digital Workplace leadership to examine how AI adoption is landing inside the organisation — and what the management-layer gap looks like in practice. Based on the T1 productised briefing. Not a pitch. A diagnostic conversation.
60–90 minutes · On-site or virtual · Complimentary
A shared view of the operating gap and a clear recommendation on whether a diagnostic is warranted.
A structured process aligned to the M1 productised diagnostic. Produces a specific, evidenced picture of where the management layer stands — capability, confidence, workflow friction and guardrail readiness — across all four leadership levels. Output is a prioritised set of recommendations, not a proposal.
4–6 weeks · Structured interviews, heat mapping and cohort analysis · Cross-functional sponsor team
A written Leadership Readiness Report with specific recommendations on scope, sequencing and entry cohorts.
Two carefully selected cohorts — typically cross-functional and drawn from two leadership levels — working through the full five-stage arc before any organisation-wide commitment. The M1 diagnostic is embedded into the pilot design. Measurable behaviour change under real operating conditions, with a hard decision point at the end.
12–16 weeks · Two cohorts · Cross-functional programme sponsor · Pre/post behavioural measurement
Evidenced behaviour change data, cohort outcome report and a specific recommendation on rollout design.
Scaling one coherent programme architecture across all four leadership levels and multiple business units simultaneously. Configured for the bank's governance model, performance reporting requirements and operational rhythm. One programme team, one measurement framework, one outcome standard.
9–18 months · All four leadership levels · Bank-wide sponsor coalition · Quarterly executive reporting
Sustained, measurable behaviour change embedded into the organisation's operating model — not dependent on continued external support.
For banks seeking internal ownership at scale. We train and certify internal facilitators to deliver the programme independently, maintaining fidelity to the guardrail model and impact measurement framework. This is not a licence-and-leave arrangement — it is a structured transition to self-sufficiency.
Concurrent with or post-rollout · Internal facilitator cohort · Certification assessment · Ongoing quality assurance
Certified internal delivery capability with programme fidelity maintained over time.
Evolve Cubed was founded by Angus Morrison. His background spans British Army Intelligence (adversarial decision-making under constraint) and FTSE100 commercial transformation (scaling leadership capability in complex, regulated organisations). Nothing about the programme is adapted from generic consulting; it is built operating-context first.
The governance model, guardrail framework and four-level architecture are designed for FCA-regulated conditions from the outset. Not bolted on after the sale.
Angus Morrison's background in adversarial decision-making (Army Intelligence) and FTSE100 transformation means the programme is designed from lived operating contexts, not adapted from generic frameworks.
A single programme model runs simultaneously across new people leaders, established managers, heads of function and senior leaders. Common language. Common measurement. No fragmentation.
FCA-regulated operating conditions are built into the guardrail model, decision rights and escalation architecture from the outset. Compliance is not a module. It is the operating framework.
Capability shift, behaviour change, approved-tool adoption and workflow proof are tracked from day one. Reporting is built for programme sponsors who need to show results to procurement, risk and ExCo.
We do not propose enterprise rollouts before we have seen diagnostic evidence that the management layer is ready to benefit from one. And we do not take on engagements where the conditions for genuine behaviour change are not in place. This is not a standard consulting posture. It is how we protect the outcome of every programme we run.
We do not run AI tools training. We build the leadership capability that makes tool adoption sustainable.
We do not propose rollouts before diagnostics. The T1 briefing exists to protect both sides from premature commitment.
We do not run generic leadership programmes with AI content added. The programme is built from this operating context, not adapted to it.
We do not work with organisations where the conditions for genuine behaviour change are not in place. We will say so in the briefing.
We do not hand senior clients to account managers. Angus Morrison leads all ExCo and board-level engagement directly.
We will tell you whether your organisation has the conditions for this to work. If it does not, we will say so — and tell you what would need to change first. The T1 briefing is complimentary, carries no commitment and takes 60–90 minutes.
COO · CHRO · AI & Digital Workplace · Board Advisor