Dependable outcomes

A repeatable process for measurable outcomes.

Five phases. Five deliverables. One common failure mode per phase, because knowing what usually goes wrong is half of preventing it.

Phase 01 — Assess

Understand the current state before recommending anything.

Stakeholder interviews. A current-state inventory of tools, data, and workflows. Data-sensitivity review covering IP, GDPR, and regulatory scope. Risk-appetite conversation with the commercial owner. Use-case discovery across operations, revenue, and customer experience.

Deliverable: Assessment report covering current state, constraints, and use-case longlist.

Why this phase matters. Skipping assessment is the single biggest predictor of an AI programme that over-promises in month one and under-delivers by month four. This is where constraints become visible before they become blockers.

Common failure mode. Treating assessment as a tooling vendor review. Tool selection belongs in month three, not month one.

Phase 02 — Prioritise

Score use cases on impact, effort, and risk.

Each use case from the longlist is scored on three axes: business impact, implementation effort, and residual risk. Quick wins are separated from strategic initiatives. We select the top 3–5 for modelling: enough range to compare, few enough to model properly.
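
To make the mechanism concrete, here is a minimal sketch in Python. The weights and the example use cases are invented for illustration; the real weighting is agreed with the commercial owner during this phase.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    impact: int  # 1-5: expected business impact
    effort: int  # 1-5: implementation effort (higher = harder)
    risk: int    # 1-5: residual risk (higher = riskier)

def priority_score(uc: UseCase) -> float:
    # Illustrative weighting only: reward impact, penalise effort and risk.
    return 2.0 * uc.impact - 1.0 * uc.effort - 1.5 * uc.risk

longlist = [
    UseCase("Support-ticket triage", impact=4, effort=2, risk=2),
    UseCase("Contract clause extraction", impact=5, effort=4, risk=4),
    UseCase("Internal knowledge search", impact=3, effort=2, risk=1),
]

# Shortlist the top 3-5 by score, highest first.
for uc in sorted(longlist, key=priority_score, reverse=True)[:5]:
    print(f"{uc.name}: {priority_score(uc):+.1f}")
```

The value is not the particular formula; it is that the ranking is explicit, so a stakeholder can challenge a weight rather than argue a conclusion.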

Deliverable: Prioritised roadmap and shortlist for cost/ROI modelling.

Why this phase matters. Most AI programmes die from attempting everything at once. Prioritisation is how we stop that.

Common failure mode. Letting the loudest stakeholder's pet use case skip the scoring. Score it anyway. If it really is the right one, it will win on the scores.

Phase 03 — Model

Cost, ROI, and risk for each shortlisted use case.

Volume estimates grounded in actual workflow data. Solution options priced side-by-side: cloud API, self-hosted, hybrid, and platform-native (e.g. Copilot, Gemini Workspace). Total cost of ownership including infrastructure, licensing, support, and operational overhead. ROI modelled on time saved, revenue uplift, and risk reduction, expressed as ranges rather than single numbers. Risk review including vendor lock-in and data-residency constraints.
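
As a sketch of what "ranges rather than single numbers" means in practice, the fragment below models a first-year ROI band from assumed inputs. Every figure in it is invented for illustration; the real inputs come from the workflow data gathered in Assess.

```python
def roi_range(hours_saved: tuple, hourly_rate: float, run_cost: tuple) -> tuple:
    """Return (pessimistic, optimistic) first-year ROI multiples.

    Pessimistic pairs the lowest saving with the highest cost;
    optimistic pairs the highest saving with the lowest cost.
    """
    low = (hours_saved[0] * hourly_rate) / run_cost[1]
    high = (hours_saved[1] * hourly_rate) / run_cost[0]
    return low, high

# Assumed inputs: 1,500-3,000 hours saved per year at £40/hour, against a
# total annual run cost of £30k-£50k (licences, support, governance).
low, high = roi_range((1_500, 3_000), 40, (30_000, 50_000))
print(f"First-year ROI: {low:.1f}x to {high:.1f}x")  # 1.2x to 4.0x
```

A band like 1.2x to 4.0x invites the useful question ("what moves us toward the top of the range?") where a single point invites only belief or disbelief.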

Deliverable: Cost/ROI model, option comparison, and justified recommendation per use case.

Why this phase matters. This is where most programmes either earn their board approval or lose it. A model that handles ranges and assumption-testing survives the finance review; a single-point forecast doesn't.

Common failure mode. Underestimating operational cost: support, monitoring, re-training, governance. The sticker price of an API call is usually 30–60% of the real run cost.
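
A quick worked example, with invented numbers, shows why that percentage matters:

```python
# Assumed annual API spend (the "sticker price"), in GBP.
api_sticker = 12_000

# If the sticker is only 30-60% of the true run cost, gross it up:
real_cost_low = api_sticker / 0.60   # sticker is 60% of real cost
real_cost_high = api_sticker / 0.30  # sticker is 30% of real cost
print(f"Estimated real run cost: £{real_cost_low:,.0f} to £{real_cost_high:,.0f}")
# -> £20,000 to £40,000 against a £12,000 sticker
```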

Phase 04 — Implement

Phased delivery with clear milestones.

Implementation plan built from the Model phase recommendation. MVP scoped for validation, not impressiveness. Integration with existing systems: CRM, ticketing, identity, storage. Security and governance set up before go-live, not after. User training tied to the specific workflow change, not a generic "AI basics" course. Go-live behind a measurement plan.

Deliverable: Live solution, operational documentation, trained users, and baseline metrics captured.

Why this phase matters. An implementation without governance is an incident waiting to happen. An implementation without training is shelfware within two months.

Common failure mode. Treating go-live as the finish line. It's the start of Measure.

Phase 05 — Measure

KPIs, reporting, iteration.

KPIs defined in Assess, baselined in Implement, now reported against. Dashboards built for the person accountable, not the person curious. Quarterly review against the ROI model: where were we right, where were we wrong, and what does that tell us about the next use case? Case-study-grade documentation of outcomes.

Deliverable: KPI report, proof of ROI, and a case study suitable for internal or external reference.

Why this phase matters. Measurement is what makes the next decision easier. Without it, every AI investment is a fresh argument. With it, the conversation compounds.

Common failure mode. Measuring inputs (tokens consumed, users onboarded) instead of outcomes (hours saved, revenue generated, risk reduced).

Shape of the method

5 phases · 5 deliverables · 1 failure mode each

Every phase ends with something you can hand to a CFO. Every phase has one thing that usually goes wrong — so you can spot it before it costs you.

Who this method is for

This method is built for teams where AI spend needs to survive a finance review, a compliance review, and a board meeting. If those aren't constraints yet, a lighter approach exists: start with the cost calculator and come back.