Per-agent power telemetry
Live wattage per node, with a CPU / RAM / GPU breakdown. Method-accuracy badges so the auditor (or you) can see at a glance where the number came from: green for IPMI/RAPL, amber for mixed methods, red for estimation.
Most AI sustainability claims live or die on the same question: where did the number come from? This page is the practical answer: the methods, the factors, and the tooling that produces evidence an auditor can trace.
For AI workloads running on infrastructure you control, every link in the carbon chain (tokens, compute, kilowatt-hours, grid factor, CO₂e, cost) can be measured per workflow. Not modelled. Not estimated from a vendor's median prompt. Measured, with the meter readings, runtime logs, and emission factors all sitting in one place.
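Once the meter readings exist, the last links of that chain are plain arithmetic. A minimal sketch, with placeholder numbers rather than real readings or factors:

```python
# Illustrative walk down the end of the carbon chain for one workflow.
# All inputs are placeholders, not real meter readings or factors.
kwh = 2.4            # measured energy for the workflow (meter reading)
grid_factor = 0.207  # kg CO2e per kWh, from the published factor in use
price = 0.28         # £ per kWh, from the tariff

co2e_kg = kwh * grid_factor   # kilowatt-hours -> CO2e
cost_gbp = kwh * price        # kilowatt-hours -> cost

print(f"{co2e_kg:.4f} kg CO2e, £{cost_gbp:.2f}")
```

The point is not the arithmetic; it is that `kwh` is a measurement with a traceable source, not a vendor's median prompt.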
For AI workloads running on someone else's infrastructure, four of those six links are private vendor metrics. You can produce a plausible-looking row in a sustainability report by multiplying the bookends by a published average. You can't trace it.
Vendor averages are not primary data. The GHG Protocol is explicit: primary data from the reporting entity's own operations takes precedence over supplier averages. The full version of this argument is in "The ESG story cloud AI can't tell". This page is the toolkit underneath it.
The reporting entity's job is to log every link, with a timestamp and a method.
Every step has a method. Every method has an accuracy band. Both get reported.
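A minimal sketch of what such a logged link could look like. The field names are illustrative assumptions, not the Portal's actual schema:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Illustrative evidence record: every link in the chain carries its own
# value, method, accuracy band, and timestamp. Field names are assumptions.
@dataclass
class MeasuredLink:
    step: str          # e.g. "kWh", "grid_factor"
    value: float
    unit: str
    method: str        # e.g. "IPMI", "RAPL", "estimation"
    accuracy: str      # "primary" | "secondary" | "tertiary"
    timestamp: str     # ISO 8601, UTC

link = MeasuredLink(
    step="kWh",
    value=2.4,
    unit="kWh",
    method="IPMI",
    accuracy="primary",
    timestamp=datetime.now(timezone.utc).isoformat(),
)
print(asdict(link))
```

An auditor reading a row like this can check the method and its accuracy band without leaving the record.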
Eight ways to get a watt reading off a server. They are not equivalent. The accuracy label travels with the number.
| Method | Measures | Accuracy |
|---|---|---|
| IPMI | Whole-system draw, baseboard sensor | Primary, vendor-calibrated |
| Intel RAPL | CPU package + DRAM | Primary, silicon counter |
| Turbostat | Per-core CPU package power | Primary, derived from RAPL |
| HWMON / lm-sensors | Board sensors, fan, voltage rails | Secondary; coverage varies by board |
| LibreHardwareMonitor | Windows-side cross-vendor sensors | Secondary; same caveat |
| Powermetrics (macOS) | SoC + GPU on Apple Silicon | Primary, Apple-published counter |
| GPU sensor queries | NVIDIA / AMD board power, util %, VRAM | Primary, vendor counter |
| Estimation | Calibrated formula (e.g. 40 W base + 10 W/core × CPU% + 3 W per 8 GB RAM) | Tertiary; flagged in every report |
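The RAPL-derived methods in the table work the same way under the hood: on Linux, RAPL exposes a monotonically increasing energy counter in microjoules (under `/sys/class/powercap`), and average power is the delta between two samples divided by the interval, with counter wraparound handled. A sketch of that conversion, assuming that interface; the sample values are made up:

```python
def rapl_watts(e0_uj: int, e1_uj: int, interval_s: float,
               max_range_uj: int) -> float:
    """Average power between two RAPL energy counter samples.

    The counter is in microjoules and wraps at the value reported by
    max_energy_range_uj, so a smaller second sample means one wrap occurred.
    """
    delta = e1_uj - e0_uj
    if delta < 0:                              # counter wrapped between samples
        delta += max_range_uj
    return (delta / 1_000_000) / interval_s    # uJ -> J, then J/s = W

# 45 J consumed over 1 s -> 45 W (placeholder max range)
watts = rapl_watts(10_000_000, 55_000_000, 1.0, 262_143_328_850)
print(watts)
```

This is why RAPL-derived figures sit in the primary band: the counter is maintained by the silicon, and the only software contribution is the sampling interval.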
VM power is allocated from the host proportionally to CPU usage. The allocation method is logged alongside the figure.
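Both the tertiary estimation formula from the table and the VM allocation rule fit in a few lines. A sketch under those stated rules; the function names are illustrative, not the Portal's actual code:

```python
def estimated_host_watts(cores: int, cpu_frac: float, ram_gb: float) -> float:
    """Tertiary estimate, using the calibrated formula from the table:
    40 W base + 10 W/core x CPU% + 3 W per 8 GB RAM."""
    return 40 + 10 * cores * cpu_frac + 3 * (ram_gb / 8)

def allocate_to_vms(host_watts: float, vm_cpu: dict) -> dict:
    """Split host power across VMs in proportion to their CPU usage."""
    total = sum(vm_cpu.values())
    return {vm: host_watts * cpu / total for vm, cpu in vm_cpu.items()}

host = estimated_host_watts(cores=8, cpu_frac=0.5, ram_gb=32)  # 40 + 40 + 12 W
shares = allocate_to_vms(host, {"vm-a": 3.0, "vm-b": 1.0})
print(host, shares)
```

Whichever path produced `host` (meter or formula), the per-VM figure inherits its accuracy band, which is why the allocation method is logged alongside the number.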
Every CO₂e and £ figure on this site uses a published emission factor. Source, version, and date all sit on the page next to the number.
If you operate on a green tariff or a PPA, your contractual factor replaces ours. The methodology is the same; only the number changes.
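The override behaves like simple precedence: a contractual factor, if supplied, replaces the published default, and everything downstream is recomputed with it. A sketch under that assumption; the values and field names are placeholders:

```python
# Illustrative factor selection. The numbers and names are placeholders.
DEFAULT_FACTOR = {"value": 0.207, "unit": "kg CO2e/kWh",
                  "source": "published grid average", "version": "2024"}

def factor_for(contract_factor=None):
    """A customer's contractual factor (green tariff / PPA) takes
    precedence over the published default. Same methodology, different
    number."""
    return contract_factor if contract_factor is not None else DEFAULT_FACTOR

ppa = {"value": 0.0, "unit": "kg CO2e/kWh",
       "source": "PPA contract", "version": "2025"}
print(factor_for(ppa)["source"], factor_for()["value"])
```

Because the factor is just another logged input, swapping it leaves the audit trail intact: the source and version still sit next to every number it touches.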
The Portal is the production system that ships every claim above. It runs across our customers' infrastructure and reports the same numbers we'd put in front of an auditor.
Fleet kWh, kg CO₂e, and £ cost on a 24-hour rolling window, with annual projections and per-location breakdowns. Same numbers across the dashboard, the API, and the PDF report.
Per-endpoint detail, GPU detail, VM allocation, methodology footer. The kind of artefact a sustainability team can hand to assurance without an apology.
Week-over-week and month-over-month deltas for kWh, kg CO₂e, and £. Cost allocation across companies based on actual consumption, useful for MSPs reporting per-customer impact.