It fits when
- Your AI programme has to survive a CFO review.
- You're choosing between SaaS, a fine-tune, self-hosting, or sticking on a cloud API.
- You need Scope 2–aligned numbers, not vendor averages.
A full cost, ROI, and option-comparison model for 3–5 shortlisted use cases. Cloud, self-hosted, hybrid, and platform-native options priced side by side, as ranges rather than single-point forecasts, with environmental impact modelled.
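To make the shape of that deliverable concrete, here is a minimal sketch of a side-by-side option comparison with a Scope 2 estimate attached. Every rate, volume, and emission factor below is a placeholder assumption for illustration, not data from a rate card or a real engagement.

```python
# Minimal sketch: TCO ranges per option, with a Scope 2 emissions estimate.
# Every number below is a placeholder assumption, not rate-card data.

OPTIONS = {
    # (cost_per_1k_requests_low, cost_per_1k_requests_high, kwh_per_1k_requests)
    "cloud_api":   (0.40, 1.20, 0.0),  # vendor's electricity is your Scope 3, not Scope 2
    "self_hosted": (0.15, 0.90, 2.5),  # wide range: utilisation drives unit cost
    "hybrid":      (0.25, 1.00, 1.2),
}

MONTHLY_REQUESTS_K = (300, 900)   # low/high volume estimate, thousands of requests
GRID_FACTOR_KG_PER_KWH = 0.233    # placeholder market-based emission factor

for name, (lo, hi, kwh) in OPTIONS.items():
    cost_lo = lo * MONTHLY_REQUESTS_K[0]
    cost_hi = hi * MONTHLY_REQUESTS_K[1]
    co2_hi = kwh * MONTHLY_REQUESTS_K[1] * GRID_FACTOR_KG_PER_KWH
    print(f"{name:12s}  £{cost_lo:,.0f}–£{cost_hi:,.0f}/month  "
          f"≤{co2_hi:,.0f} kg CO2e (Scope 2)")
```

The point of the range form: each option carries its own low/high envelope, so the comparison survives a change in the volume estimate without a rebuild.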
Intake
Agree the 3–5 use cases in scope, the options to price, and where the numbers live.
Assumption review
Volume estimates, rate cards, depreciation schedules, residency constraints. Surface what is guessed vs measured.
Model build
TCO and ROI per use case per option. Ranges, not points. Sensitivity tables for the inputs that move the answer.
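As an illustration of what a sensitivity table does, a minimal sketch assuming a toy TCO formula and placeholder inputs: vary each input across its range while the others hold at baseline, then rank by how much the answer moves.

```python
# Minimal one-way sensitivity table. All inputs and figures are
# placeholder assumptions; the real model uses the agreed rate cards.

BASELINE = {"volume_k": 600, "unit_cost": 0.70, "eng_hours": 120}

RANGES = {
    "volume_k":  (300, 900),    # monthly requests, thousands
    "unit_cost": (0.40, 1.20),  # per 1k requests
    "eng_hours": (60, 240),     # integration and maintenance effort
}

HOURLY_RATE = 95.0              # placeholder internal rate

def monthly_tco(volume_k, unit_cost, eng_hours):
    """Toy TCO: usage cost plus engineering effort amortised over a year."""
    return volume_k * unit_cost + eng_hours * HOURLY_RATE / 12

base = monthly_tco(**BASELINE)
print(f"baseline TCO: £{base:,.0f}/month")

# Vary one input at a time; sort by swing to find what moves the answer.
swings = []
for key, (lo, hi) in RANGES.items():
    tco_lo = monthly_tco(**{**BASELINE, key: lo})
    tco_hi = monthly_tco(**{**BASELINE, key: hi})
    swings.append((abs(tco_hi - tco_lo), key, tco_lo, tco_hi))

for swing, key, tco_lo, tco_hi in sorted(swings, reverse=True):
    print(f"{key:10s}  £{tco_lo:,.0f}–£{tco_hi:,.0f}  (swing £{swing:,.0f})")
```

Inputs with small swings can stay as rough guesses; inputs at the top of the table are the ones worth measuring before sign-off.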
Sensitivity + risk
Which assumptions break the case if they're wrong? Which risks are absorbable? Annotate the model.
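One hedged example of "assumptions that break the case": solve for the break-even value of a single input and annotate the model with it. All figures here are placeholder assumptions.

```python
# Minimal break-even check: at what monthly volume does the case stop
# paying back within the target window? Placeholder figures throughout.

BUILD_COST = 40_000.0          # one-off build cost, assumed
SAVING_PER_K_REQUESTS = 1.10   # net saving per 1k requests, assumed
RUN_COST_PER_MONTH = 1_500.0   # fixed running cost, assumed
PAYBACK_MONTHS = 18            # the window the approver cares about

def payback_ok(volume_k: float) -> bool:
    """True if the build pays back within the window at this volume."""
    net = volume_k * SAVING_PER_K_REQUESTS - RUN_COST_PER_MONTH
    return net > 0 and BUILD_COST / net <= PAYBACK_MONTHS

# Bisect for the smallest volume that still clears the window.
lo, hi = 0.0, 100_000.0
for _ in range(60):
    mid = (lo + hi) / 2
    if payback_ok(mid):
        hi = mid
    else:
        lo = mid
print(f"break-even volume ≈ {hi:,.0f}k requests/month")
```

If the measured volume sits comfortably above the break-even line, the risk is absorbable; if it straddles the line, that assumption gets flagged in the model.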
Readout
A session with the person who has to approve the spend. Written report + live model handed over.
1–3 weeks
Fixed per use case
Fixed fee per shortlisted use case. Bundle pricing applies beyond three. Confirmed at scope.
An anonymised sample of a past deliverable for this engagement is being prepared. Until it's published here, the clearest picture comes from the methodology page. This service is one productised slice of the same method.
A model with ranges and assumption-testing survives finance review. A single-point forecast doesn't. This is the difference between "we think it'll pay back in 18 months" and "here is the defensible envelope."
Before you model, name the problem. See the Discovery Sprint.
After the model says go, prove it. See the Rapid Secure AI POC.
When the model says the answer isn't AI, see the Automation-First Pilot.