The transformation playbook

Stabilise the core. Prioritise the journeys. Prove the value.

How we take organisations from fragmented legacy operations to AI-enabled customer experience — without losing control of value, risk, or adoption along the way.

Business-led

Every engagement starts with outcomes, economics, and customer value — not a technology features discussion. The platform follows the design; the design follows the problem.

AI-ready

AI is assessed from the first diagnostic and only deployed when it is trusted, measurable, and safe to scale. We don't run AI pilots for their own sake.

Iterative by design

Work is delivered in waves. Each wave proves value before the next is funded. Learning and re-prioritisation are structural features, not exceptions.

A model built around how decisions actually get made.

In practice, these phases overlap and repeat as the programme learns. What stays constant is the logic: no phase begins until its predecessor has produced a decision worth building on.

PHASE 01

Set direction

Strategic alignment

We establish why the programme exists, what it will change, and how success will be judged — before any delivery work begins.

  • Transformation charter and value thesis
  • Governance structure and sponsorship
  • Scope, guardrails, and AI policy boundaries
  • Executive alignment on top objectives

PHASE 02

Diagnose

Current-state baseline

A structured assessment across journeys, operations, technology, people, data, and AI readiness. Evidence-based. No assumptions carried forward.

  • Journey and channel performance audit
  • Technology and integration landscape review
  • Data quality and governance assessment
  • AI readiness scoring across use cases

PHASE 03

Design

Future-state blueprint

Target journeys, target operating model, target architecture, and a prioritised use-case portfolio — linked into a single coherent picture.

  • Redesigned customer journeys
  • Target operating model
  • Prioritised AI use-case portfolio
  • Business case with ROI and TCO view

PHASE 04

Plan

Wave roadmap

Ambition becomes a sequenced programme with milestones, ownership, funding view, and continuity safeguards. Nothing is left to assumption.

  • Wave-based delivery roadmap
  • Change and adoption plan
  • Risk and continuity controls
  • Success measures agreed and baselined

PHASE 05

Deliver and prove

Pilot and scale decision

The first wave is configured and delivered. Pilot results are reviewed against the value case. Evidence — not optimism — drives the decision to scale.

  • Configured and tested solution
  • Adoption evidence and gap actions
  • Pilot results against agreed KPIs
  • Scale, redesign, or stop decision

PHASE 06

Scale and improve

Continuous value realisation

The programme moves to business-as-usual, but improvement doesn't stop. Value is tracked, the backlog is managed, and the next wave is prioritised on evidence.

  • Value dashboard and BAU rhythm
  • Continuous improvement backlog
  • Quarterly executive reviews
  • Next-wave priorities and funding decisions

The pace is set by readiness, not by a vendor timeline.

Anchor, Accelerate, Advance describes the maturity arc we use to set expectations, sequence investment, and measure progress. Every organisation enters at a different point; the model adapts to meet them there.

Stage one

Anchor

Foundation & stabilisation

  • Cloud migration and core platform live
  • Baseline CX measurement established
  • Core contact flows configured and tested
  • Teams trained, governance in place
  • Data flowing reliably to reporting

Consulting focus

Mobilisation, configuration oversight, change management, go-live readiness. Stable before smart.

Stage two

Accelerate

Optimisation & control

  • Channel and routing performance improved
  • Operating model tightened around data
  • AI pilots scoped and deployed
  • Agent tooling and knowledge management optimised
  • Efficiency gains measured and compounding

Consulting focus

Performance advisory, AI use case design, continuous improvement governance. Value building on a solid base.

Stage three

Advance

Differentiation & innovation

  • Predictive analytics driving proactive outreach
  • Cross-channel orchestration at scale
  • AI-native operating model embedded
  • Autonomous quality management live
  • Competitive advantage measurable

Consulting focus

Innovation strategy, retained optimisation, next-wave investment prioritisation. Sustained differentiation.

Six design rules. No exceptions.

Best-in-class transformation is business-led, journey-focused, and explicit about adoption, control, and value. The principles below are not aspirational — they are structural requirements for how we work.

Start with outcomes

The programme is anchored in customer value, employee effectiveness, efficiency, risk reduction, and business economics before feature scope is debated. Technology is a means, not the objective.

Design around journeys

Work is organised around customer journeys, contact reasons, and value streams — so operating model, architecture, and measurement stay connected throughout delivery and beyond.

Standardise the core

Standard cloud patterns are used wherever they are good enough. Bespoke effort is targeted only at the journeys and decisions that create genuine competitive advantage.

Build adoption early

Change, training, sponsorship, and communications are delivery work from day one — not a late-stage support task. If people aren't using it, it hasn't been delivered.

Make AI practical

AI is prioritised where data, knowledge, governance, and user behaviour are ready. We avoid novelty use cases that don't have a measurable business case and a clear owner.

Measure continuously

Baselines, targets, adoption, delivery health, and value realisation are tracked during and after go-live. The programme keeps improving because the data says so — not because the project plan says it should.

What this is not: A like-for-like migration. An AI science experiment. An IT-only governance model. A project that ends when the platform goes live. If any of those descriptions fit your current engagement, we should talk.

Every AI idea should pass six questions.

If the answer to any of the first five is no, the right response is to redesign, defer, or add controls — not to proceed and hope. The sixth governs whether a proven use case should scale. This is the test we apply at the start of every engagement, and the discipline we hold throughout.

1

Is there a clear business outcome?

The use case is tied to a measurable KPI — containment rate, handle time, quality score, productivity, conversion, or cost to serve. "Interesting" is not a business case.

2

Is the process ready?

The workflow is stable enough to augment or automate and has a clear, named owner. AI applied to a broken process produces a faster broken process.

3

Are the data and knowledge ready?

The information the AI needs is available, curated, current, and governed. Poor data produces confident wrong answers — which is worse than no answer.

4

Can humans stay in control?

There is a clear escalation path, override model, audit trail, and policy boundary. Any AI deployment that removes human control is a governance risk, not a feature.

5

Can the organisation adopt it?

Users understand how to work alongside the capability, trust it, and know how to improve it over time. Adoption is a design problem, not a training afterthought.

6

Does it scale safely and economically?

The design can be monitored, financially justified, and expanded without creating hidden risk or compounding technical debt. Scale should be a decision, not a surprise.

Good first-wave AI bets

  • Agent assist and interaction summarisation
  • Knowledge surfacing and real-time guidance
  • Intent detection and routing optimisation
  • Quality analytics and supervisor insight
  • High-volume self-service with clear escalation paths

Avoid for now

  • Poorly defined use cases with no KPI owner
  • Use cases relying on low-quality or fragmented data
  • High-risk decisions with weak controls or no override
  • Novelty features that look impressive but don't change outcomes
  • Use cases the frontline doesn't trust or understand
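The six-question gate above can be expressed as a simple checklist. This is an illustrative sketch only — the field names and the pass/defer logic are assumptions for clarity, not a standard tool; in practice each gate is a judgement backed by evidence, not a boolean.

```python
# Illustrative sketch of the six-question AI gate described above.
# Field names and the decision rule are assumptions, not a real product.
from dataclasses import dataclass

@dataclass
class UseCaseGate:
    clear_outcome: bool    # Q1: tied to a measurable KPI with an owner
    process_ready: bool    # Q2: stable workflow, clearly owned
    data_ready: bool       # Q3: available, curated, current, governed
    human_control: bool    # Q4: escalation, override, audit trail
    adoptable: bool        # Q5: users trust it and know how to use it
    scales_safely: bool    # Q6: monitorable and financially justified

    def decision(self) -> str:
        first_five = [self.clear_outcome, self.process_ready,
                      self.data_ready, self.human_control, self.adoptable]
        if not all(first_five):
            # Any of the first five failing blocks deployment outright.
            return "redesign-or-defer"
        if not self.scales_safely:
            # Deployable, but the scale decision is withheld.
            return "pilot-only"
        return "proceed"

# Example: strong data and controls, but the frontline doesn't trust it yet.
case = UseCaseGate(True, True, True, True, False, True)
print(case.decision())  # → redesign-or-defer
```

The point of the sketch is the asymmetry: the first five questions gate deployment, while the sixth only gates the decision to scale.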

A small set of executive-grade outputs.

Each deliverable exists to support a decision, a control point, or a value review — not to fill a document register. You will own everything we produce.

01

Transformation charter and value thesis

Defines why the programme exists, what it will change, and how success will be judged. The document that executive sponsors can anchor every subsequent decision to.

02

Current-state and AI readiness diagnostic

A fact base on pain points, performance gaps, risk exposure, and readiness to adopt AI-enabled ways of working. The foundation everything else is built on.

03

Prioritised use-case portfolio and business case

Shows where value sits, what should be done first, and — critically — where not to spend yet. Includes ROI modelling and TCO view.

04

Target-state blueprint

Links target journeys, operating model, architecture, controls, and change impacts into one coherent view. The design authority for every delivery decision that follows.

05

Wave roadmap, risk plan, and change plan

Turns ambition into a sequenced programme with milestones, named ownership, and continuity safeguards. Includes the change management and adoption plan for each wave.

06

Value dashboard and quarterly review pack

Tracks realised value, adoption, quality, AI performance, and next-wave decisions after go-live. The ongoing proof that the investment is working.

Four metric families. One question each.

The scorecard is tailored to each client, but these four families almost always matter. Success criteria are agreed at the start of every engagement and revisited at every value review.

Business value

Is the programme paying back?

  • Cost to serve
  • Automation rate
  • Agent productivity
  • Conversion and revenue impact
  • TCO and ROI

Experience

Are customers and employees feeling it?

  • CSAT and NPS
  • First contact resolution
  • Customer effort score
  • Wait time and service levels
  • Journey performance by segment

Adoption and operations

Is the organisation using it well?

  • Feature adoption by team
  • Training completion and readiness
  • Process adherence
  • Supervisor usage and engagement
  • Backlog health and stability

AI quality and trust

Is AI safe, trusted, and scalable?

  • Answer quality and accuracy
  • Fallback and escalation rates
  • Override frequency
  • Compliance exceptions
  • Drift indicators over time
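A value dashboard built on these families reduces to one repeated check: each metric against its agreed baseline, in its agreed direction. The sketch below is a minimal illustration — the metric names, figures, and thresholds are invented for the example, not taken from any real scorecard.

```python
# Minimal sketch of a baseline-vs-current check for a value dashboard.
# Metric names and numbers are illustrative assumptions only.

def metric_status(name, baseline, current, lower_is_better):
    """Return (name, status) for one metric against its agreed baseline."""
    improved = current < baseline if lower_is_better else current > baseline
    return (name, "improving" if improved else "watch")

scorecard = [
    # Business value: cost to serve should fall.
    metric_status("cost_to_serve", baseline=4.20, current=3.85,
                  lower_is_better=True),
    # Experience: first contact resolution should rise.
    metric_status("first_contact_resolution", baseline=0.71, current=0.78,
                  lower_is_better=False),
    # AI quality and trust: override frequency should fall.
    metric_status("override_frequency", baseline=0.05, current=0.09,
                  lower_is_better=True),
]

for name, status in scorecard:
    print(f"{name}: {status}")
```

Here the first two metrics read "improving", while override frequency is flagged "watch" — the kind of signal that feeds a quarterly review rather than a celebration.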

What the first 90 days look like.

The objective in the first 90 days is not to deliver transformation — it is to build the foundation on which transformation can be trusted. That means evidence first, ambition second.

Days 1 – 30

Align and diagnose

  • Confirm executive sponsorship and business ownership
  • Agree top objectives and the KPIs that define success
  • Launch the diagnostic and gather evidence
  • Establish access to data, stakeholders, and process owners

Days 31 – 60

Shape and prioritise

  • Define the target state across journeys and operations
  • Prioritise AI use cases by readiness and value
  • Test AI viability against the six-question framework
  • Draft the value case with realistic modelling

Days 61 – 90

Approve and launch

  • Executive approval of roadmap, investment, and first wave
  • Governance rhythm established and running
  • First wave or pilot launched with clear success criteria
  • Value dashboard baselined and owned

Ready to start with the Diagnostic?

Get in touch