Pillar 2 · ESG

The infrastructure decision is an ESG decision.

Where you run your AI (cloud, sovereign, or hybrid) shapes its energy footprint, its heat profile, its lifecycle, and the audit trail you can produce at the end of the year. This is the page about that choice, and an honest look at what good actually looks like.

The thesis

The silent hum of servers and cooling systems is the sound of a significant and growing environmental footprint. It does not have to stay that way. But the lever isn't a sustainability slogan. It's the infrastructure choice underneath.

Cloud AI is brilliant for trying things out. For long-running production workloads, the picture changes. The energy is real, the heat is real, the supply chain is real, and the data sovereignty question that finance and security teams already ask quietly maps almost one-to-one onto the question a sustainability officer is starting to ask out loud.

You own the stack, you own the data, and you own the meter. That last one is what makes the ESG number defensible. Pillar 1 (Measurable AI) is about the meter; this page is about the stack.

Where the carbon actually sits

People talk about AI's carbon as if it's just the kilowatt-hours the GPU draws while running inference. That's a slice of it. The full picture is bigger:

  • Inference and training energy. The GPU draw during a workflow. Measurable per request when you run it yourself.
  • Cooling overhead. For every watt the chip draws, the building spends more keeping it cool. Power Usage Effectiveness (PUE) is total facility power divided by IT power: older data centres run 1.5–1.7; modern, well-designed ones run closer to 1.1–1.2. The difference is not small.
  • Idle and standby load. Servers that sit warm but idle still draw real power. Cloud abstracts this away from you; on your own kit you can see it.
  • Embodied carbon. Manufacturing the silicon, the boards, the racks, the building. Spread across a five-to-seven-year life. Rare-earth supply, mining, and shipping all sit here.
  • Water. Evaporative cooling consumes drinking water in some jurisdictions. A growing disclosure ask, and one most operators aren't yet ready for.

The point isn't that one number captures it. The point is that an infrastructure decision is a decision about all five.
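The five components fold into one annual figure per server. A minimal sketch of that arithmetic, where every constant is an illustrative placeholder rather than a measurement:

```python
# Sketch: how the footprint components combine into one annual figure.
# All numbers below are illustrative placeholders, not measurements.

GRID_FACTOR_KG_PER_KWH = 0.25   # contractual grid intensity (assumed)
PUE = 1.2                        # modern facility; older kit runs 1.5-1.7
EMBODIED_KG = 3_000.0            # manufacturing carbon for one server (assumed)
LIFETIME_YEARS = 6               # inside the five-to-seven-year window

def annual_co2e_kg(it_kwh_active: float, it_kwh_idle: float) -> float:
    """Annual CO2e for one server: operational plus amortised embodied carbon."""
    # Cooling overhead: PUE multiplies every IT kWh into facility kWh.
    facility_kwh = (it_kwh_active + it_kwh_idle) * PUE
    operational = facility_kwh * GRID_FACTOR_KG_PER_KWH
    embodied = EMBODIED_KG / LIFETIME_YEARS
    return operational + embodied

# A server drawing 500 W at a 40% duty cycle, 100 W warm-but-idle otherwise:
active_kwh = 0.5 * 0.4 * 8760
idle_kwh = 0.1 * 0.6 * 8760
print(f"{annual_co2e_kg(active_kwh, idle_kwh):.0f} kg CO2e/year")
```

Note what the idle term does: on your own kit the warm-but-idle draw is visible and reducible, where a cloud bill simply hides it.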

Heat reuse: what good actually looks like

Server heat is usually wasted. Vented to the atmosphere, dumped into a chiller, gone. A small but growing number of operators are doing something better with it.

A few examples worth naming.

  • In Dublin and Finland, waste heat from data centres warms homes and provides hot water to local communities, including public housing.
  • In New York City, recovered server heat warms low-income residential buildings.
  • Germany's Energy Efficiency Act now requires new data centres to reuse a share of their waste heat.

It turns something wasteful into something useful, cuts carbon, and makes a data centre a better neighbour. None of these are pilots. They are operating today, at scale, in jurisdictions that decided the regulation needed to lead the market rather than wait for it.

The honest follow-up: most operators aren't doing this. Most data centres still run air-cooled, vent the heat, and quote their PUE in the marketing materials. The gap between what's possible and what's typical is wide. If you're picking a colocation partner, ask the question. If you're hosting it yourself, look at liquid cooling. For the latest generation of high-density servers, air cooling is becoming inefficient anyway, and liquid systems open the door to heat recovery later.

Source: "The Hidden Costs of Cloud AI" and "The Silent Hum: Building Sustainable Data Centers" (mattshore.co.uk).

Sovereignty and sustainability are the same conversation

The people asking "where is our data" and the people asking "what's our carbon" are arriving at the same answer from opposite directions. Both are asking the operator: what do you control, and what can you prove?

  • Cloud AI. Fast to start, infinitely scalable, and every link in the carbon chain (compute → kWh → grid factor → CO₂e) is a vendor's private metric. Your data sits on someone else's servers. You can negotiate residency clauses; you can't read the meter.
  • Sovereign / self-hosted. Slower to start, deliberate to scale, and every link in the chain is a number you measure. Your data stays in your perimeter. Your kWh comes off your PDU. Your grid factor is your contractual figure.
  • Hybrid. What most organisations actually run. Routine, sensitive, predictable workloads on the sovereign stack. Frontier or rarely-used capabilities still on cloud. Different data-quality disclosures for each, in the same report.

None of this is novel infrastructure engineering. The pieces (Proxmox, Ollama, n8n, vLLM, a smart PDU, a rack-level meter, a managed Kubernetes layer) are commodity. The novel part is the sustainability-reporting engineering: exposing the measurements you already have, timestamping them, and keeping them in a form an auditor can trace.
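That reporting engineering is small. A sketch of the core move, turning a raw PDU reading into a timestamped record an auditor can trace (the meter read itself is stubbed; the field names are ours, not a standard):

```python
# Sketch: one interval's PDU reading -> a traceable emission record.
# Swap the stubbed kWh figure for your smart-PDU's actual reading.

import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

GRID_FACTOR_KG_PER_KWH = 0.25  # your contractual figure, not a vendor average

@dataclass
class EmissionRecord:
    timestamp_utc: str
    source: str          # which meter produced the kWh figure
    kwh: float
    grid_factor: float   # kg CO2e per kWh, with known provenance
    co2e_kg: float
    method: str          # "measured" vs "estimated" - the accuracy badge

def record_interval(pdu_kwh: float, meter_id: str) -> EmissionRecord:
    return EmissionRecord(
        timestamp_utc=datetime.now(timezone.utc).isoformat(),
        source=meter_id,
        kwh=pdu_kwh,
        grid_factor=GRID_FACTOR_KG_PER_KWH,
        co2e_kg=pdu_kwh * GRID_FACTOR_KG_PER_KWH,
        method="measured",
    )

rec = record_interval(pdu_kwh=12.4, meter_id="rack-a1-pdu-1")
print(json.dumps(asdict(rec), indent=2))  # append one line per interval
```

Append these to an immutable log and the auditor's trace is the log itself: every CO₂e figure resolves to a meter, a timestamp, and a grid factor with provenance.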

If your team is reading this because of the Broadcom VMware licensing squeeze or a similar lock-in event, sovereign infrastructure isn't just a sustainability story. It's also the cleanest off-ramp from a vendor that decided to renegotiate. The migration path is well-trodden; the ESG benefit is a side-effect that becomes the headline once the auditor asks.

What most operators aren't doing yet

An honest list. We'd rather name the gap than pretend it's solved.

  • Heat reuse beyond marketing slides. Talked about widely; deployed narrowly. If your colocation partner won't show you a connected heat-recovery loop, it doesn't exist there.
  • Per-workflow disclosure. Major cloud providers publish averages, not workload-level figures. Google publishes a per-prompt figure for one model on its own infrastructure; OpenAI and Anthropic publish nothing comparable. Multiplying Google's number by your volume looks like data; it isn't.
  • Embodied-carbon accounting. The kWh-during-inference number is starting to be reported. The carbon to build the rack is largely off-balance-sheet. CSRD will pull this into scope; most operators are not ready.
  • Water disclosure. Where evaporative cooling is used, water consumption is real and rising. Disclosure is patchy. Expect this to harden over the next two reporting cycles.
  • Lifecycle thinking. Refresh cycles are still being decided on raw performance, not on the per-token carbon-and-cost curve over a five-year window. The maths usually points at running last-generation kit longer than the vendor's roadmap suggests.

None of these are reasons to stop. They are the reasons we built a tooling layer underneath the claims, so that when the framework asks for the next number, you can produce it instead of estimating it.
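The lifecycle maths from the list above is worth seeing on paper. A sketch of the per-token carbon curve, keeping last-generation kit versus refreshing; every figure here is an illustrative placeholder, not a benchmark:

```python
# Sketch: per-token carbon over a five-year window, old kit vs a refresh
# that is 40% more efficient but carries new embodied carbon. Placeholder
# numbers throughout - your meter readings replace them.

GRID_KG_PER_KWH = 0.25   # contractual grid intensity (assumed)
TOKENS_PER_YEAR = 5e9    # workload volume (assumed)
WINDOW_YEARS = 5         # the five-year window from the text

def grams_per_mtok(joules_per_token: float, embodied_kg: float) -> float:
    """CO2e in grams per million tokens: operational + amortised embodied."""
    kwh_per_mtok = joules_per_token / 3.6e6 * 1e6   # J/token -> kWh per 1M tokens
    operational_g = kwh_per_mtok * GRID_KG_PER_KWH * 1000
    embodied_g = embodied_kg * 1000 / (TOKENS_PER_YEAR * WINDOW_YEARS / 1e6)
    return operational_g + embodied_g

old_kit = grams_per_mtok(joules_per_token=2.0, embodied_kg=0.0)     # already paid for
refresh = grams_per_mtok(joules_per_token=1.2, embodied_kg=3000.0)  # new embodied carbon

print(f"old kit: {old_kit:.1f} g/Mtok, refresh: {refresh:.1f} g/Mtok")
```

With these placeholder numbers the last-generation kit still wins per token: the refresh's embodied carbon outweighs its efficiency gain over the window. Whether that holds for your estate is exactly the question your own meter answers.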

Tools and platforms we evaluate

Picking the right components matters as much as the architecture. The KB4AI knowledge base ranks self-hosted and sovereign AI tools on three dimensions: Data Security, Trust, and Risk. Same lens an auditor would apply, applied at procurement.

Compute and runtime

Proxmox or managed Kubernetes for the orchestration layer; Ollama, vLLM, or llama.cpp as the inference runtime. All open-source. All run on commodity hardware. None of them ship your prompts to a third party.

Telemetry and ESG instrumentation

The Horizon Portal handles per-agent power, kWh, and CO₂e with method-accuracy badges and a methodology footer on every report. Pulse covers Proxmox-and-Docker visibility for the underlying estate.

Workflow and automation

n8n for the deterministic orchestration. Hermes as the Horizon Portal's AI assistant layer, backed by the same self-hosted Ollama instance and with strict read-only access to the stack. AI-assisted ops without handing over write credentials, and without any prompt or stack data leaving your perimeter.

Governance overlay

Audit logs, access control, and primary-data capture aren't add-ons; they're the substrate. The governance library covers the topics any sovereign deployment needs to think through before it ships.

Where to go next

What this isn't. A claim that sovereign infrastructure is automatically lower-carbon than cloud. At small scale, hyperscalers' efficiency and renewable-purchasing often win on raw kWh. The sovereign win is data quality and control. The kWh number you produce is yours, measured, and defensible. The grid mix and the runtime are still the levers that move the absolute figure.

Read next