Morson Edge Newsroom

Is the AI boom quietly becoming a water problem?

Uncommon Sense

18.02.2026

By the end of this decade, global freshwater demand is projected to exceed supply by around 40%. At the same time, the expansion of AI‑driven technologies is expected to push AI's global water consumption from roughly 1.1 billion to 6.6 billion cubic metres a year by 2027.

For boards governing water utilities, this creates a profound paradox: AI is simultaneously one of the most powerful tools for resilience and one of the fastest‑growing, largely ungoverned sources of water stress in the wider system.

The macro picture: AI’s rising water footprint

Recent work from the UK Government Digital Sustainability Alliance shows how quickly AI’s water demand is scaling.

“Training a single large model such as GPT‑3 consumed in the order of 700,000 litres of water, while broader AI expansion is forecast to multiply global water use sixfold within just a few years.” (UKGov Sustainable ICT)

This water is consumed at three points in the AI lifecycle:

  1. directly in data centres for cooling
  2. indirectly in water‑intensive electricity generation
  3. upstream in semiconductor and hardware manufacturing.

The challenge is intensified by geography. Over half of the world’s data centres are now located in river basins with high risks of water pollution or scarcity, and nearly 68% sit in or near protected or high‑biodiversity areas that depend on clean water flows.

US‑focused analyses suggest that large facilities can each use millions of litres of water per day for cooling, putting them in direct competition with agriculture and domestic supply during droughts. While these figures vary by climate, design and cooling technology, the direction of travel is clear: AI is becoming a material driver of local and regional water stress, even as regulators in places like the EU are only beginning to require basic disclosure of data‑centre freshwater consumption.

Against this backdrop, water utilities globally are being encouraged, and sometimes mandated, to accelerate digital transformation and AI adoption. Sector research highlights several converging pressures:

  • ageing infrastructure
  • climate‑driven volatility
  • tightening regulatory standards
  • non‑revenue water
  • increasingly scarce specialist skills.

AI and advanced analytics sit at the heart of many proposed responses, from leak detection and pressure optimisation to predictive maintenance and energy‑efficient pump scheduling.

For utilities in Europe and North America, AI‑enabled asset intelligence is becoming integral to justifying investment to regulators, demonstrating performance improvements, and maintaining public trust after high‑profile service failures.

In rapidly urbanising regions of Asia, Latin America and Africa, digital technologies are framed as a way to leapfrog legacy constraints, combining satellite data, smart meters and AI to extend safe water access under tight capital and operational constraints. Across these contexts, boards are being told, often credibly, that without AI they will struggle to meet resilience, affordability and climate commitments.

The governance and ethical paradox

This is where the double bind emerges. A utility might deploy AI to reduce leakage, optimise abstractions and improve demand management, generating genuine local water savings and emissions reductions. Yet the AI workloads enabling those wins are frequently run in cloud data centres located far outside the utility’s catchment, drawing on water resources that may already be under stress and affecting communities and ecosystems that the utility has no formal mandate to protect.

This challenge has several layers that matter to senior leadership:

Scope of responsibility

Historically, water companies have defined stewardship in terms of their regulated geography and assets. AI extends their water footprint into other basins through digital infrastructure, raising uncomfortable questions about whether “responsible water use” should now cover these outsourced impacts.

Visibility and metrics

While some jurisdictions (for example, the EU) are beginning to require data‑centre reporting of freshwater consumption, there is no standardised, widely adopted way for corporate AI users to understand the water intensity of specific cloud services or models. This means most utilities cannot currently quantify the water cost of, say, a new AI‑driven customer service system or optimisation engine.
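In the absence of an agreed standard, a utility can still build a rough, order‑of‑magnitude estimate from figures that providers sometimes publish, such as a data centre's water usage effectiveness (litres of cooling water per kilowatt‑hour of IT energy) and the water intensity of the local electricity grid. The sketch below illustrates the arithmetic only; every figure and parameter name in it is a hypothetical placeholder, not a measured or vendor‑supplied value.

# Illustrative sketch only: a back-of-envelope "digital water" estimate for an AI
# workload, assuming the provider discloses water usage effectiveness (WUE, litres
# of cooling water per kWh of IT energy), power usage effectiveness (PUE) and the
# water intensity of grid electricity. All figures below are hypothetical.

def estimate_water_litres(it_energy_kwh: float,
                          wue_l_per_kwh: float,
                          pue: float,
                          grid_water_l_per_kwh: float) -> dict:
    """Split a workload's estimated water footprint into on-site and off-site parts."""
    onsite_cooling = it_energy_kwh * wue_l_per_kwh            # direct cooling water
    total_energy = it_energy_kwh * pue                         # IT load plus facility overheads
    offsite_generation = total_energy * grid_water_l_per_kwh   # water embedded in electricity
    return {
        "onsite_litres": onsite_cooling,
        "offsite_litres": offsite_generation,
        "total_litres": onsite_cooling + offsite_generation,
    }

# Hypothetical example: a customer-facing AI assistant drawing 2,000 kWh of IT
# energy a month in a facility with WUE 1.8 L/kWh, PUE 1.2 and grid water
# intensity 1.9 L/kWh.
print(estimate_water_litres(2000, wue_l_per_kwh=1.8, pue=1.2, grid_water_l_per_kwh=1.9))

Even a crude calculation of this kind gives a board something to compare across suppliers and programmes, and makes explicit which disclosures the estimate depends on.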

Equity and legitimacy

Investigations in the US and elsewhere have documented tensions between data centres and local residents, including cases where industrial water use is perceived to compromise domestic supply quality or availability. If a publicly scrutinised water company is seen to benefit from AI services that exacerbate such tensions abroad, it risks reputational and political issues that transcend its formal licence conditions.

It is reasonable to suggest that the sector is at risk of solving one set of water problems by externalising others, in ways that are hard to see, measure, or govern.

Practical levers for responsible AI adoption in water utilities

None of this is an argument for stepping away from AI; it is an argument for taking ownership of its wider water consequences. Several practical levers are emerging that boards can use to shape strategy.

Build digital water criteria into procurement

Water companies can extend existing ESG procurement practices to explicitly include digital water criteria when selecting cloud and AI suppliers.

For global groups operating across multiple jurisdictions, this also means aligning procurement standards beyond the minimum local regulatory requirement, using the strictest regime (for example, EU data‑centre reporting rules) as a baseline for all markets.
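As a purely illustrative sketch, digital water criteria of the kind described above could be captured as a simple supplier screening checklist; the field names, pass conditions and example responses below are assumptions drawn from the themes in this article rather than any established industry standard.

# Illustrative sketch only: "digital water" procurement criteria expressed as a
# simple supplier checklist. Field names and pass conditions are assumptions based
# on the themes above (disclosure, basin stress, cooling approach), not a standard.

DIGITAL_WATER_CRITERIA = {
    # Reports data-centre freshwater consumption to at least the strictest
    # applicable regime (for example, EU data-centre reporting rules).
    "freshwater_disclosure": lambda s: s.get("reports_freshwater_use", False),
    # Discloses data-centre locations and the water-stress level of the basins
    # that serve them.
    "basin_stress_disclosed": lambda s: s.get("basin_stress_level") is not None,
    # Uses low-water cooling or reclaimed / non-potable water sources.
    "low_water_cooling": lambda s: s.get("cooling") in {"air", "immersion", "reclaimed"},
}

def screen_supplier(supplier: dict) -> dict:
    """Return a pass/fail result for each digital water criterion."""
    return {name: bool(check(supplier)) for name, check in DIGITAL_WATER_CRITERIA.items()}

# Hypothetical supplier response
print(screen_supplier({
    "reports_freshwater_use": True,
    "basin_stress_level": "medium",
    "cooling": "reclaimed",
}))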

Treat AI water use as part of enterprise‑level risk

Boards already receive dashboards on leakage, abstraction, drought risk and flood resilience. AI‑related water use can be integrated into these frameworks:

  • Include high‑level estimates of AI‑related water demand and exposure in risk registers, particularly where major strategic programmes rely on intensive computation (e.g. large‑scale digital twins, system‑wide optimisation, or extensive customer‑facing generative AI).
  • Ask internal teams and external advisors to develop scenario analyses: if AI workloads double over five years, what is the likely trajectory of our digital water footprint, and how does that intersect with emerging regulation and stakeholder expectations? (A simple illustrative projection is sketched below.)
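That kind of projection needs only simple arithmetic, as the minimal sketch below shows; the baseline figure and growth assumption are hypothetical placeholders for a risk register, not forecasts.

# Minimal sketch of the scenario analysis described above: project a utility's
# AI-related water footprint under an assumed compound growth rate. Baseline and
# growth figures are hypothetical placeholders, not forecasts.

def project_footprint(baseline_m3_per_year: float,
                      annual_growth_rate: float,
                      years: int) -> list:
    """Return (year, cubic metres per year) pairs under compound growth."""
    return [(year, baseline_m3_per_year * (1 + annual_growth_rate) ** year)
            for year in range(years + 1)]

# Hypothetical scenario: AI workloads (and the water they draw) double over five
# years, i.e. roughly 15% compound annual growth, from a 50,000 m3/year baseline.
doubling_growth = 2 ** (1 / 5) - 1   # about 14.9% a year doubles demand in five years
for year, m3 in project_footprint(50_000, doubling_growth, 5):
    print(f"Year {year}: {m3:,.0f} m3/year")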

This reframes AI as both an opportunity and a potential liability in enterprise risk management, rather than a purely technical investment.

Partner on location, technology and innovation

There is a growing set of mitigation technologies and siting strategies, but they require joined‑up thinking:

  • Encourage and co‑design data‑centre siting strategies that favour cooler climates, low‑stress basins, or coastal locations using non‑potable or desalinated water, while accounting for biodiversity and community impacts.
  • Explore pilots where treated wastewater or reclaimed industrial water supports nearby data‑centre cooling, turning a potential conflict into a circular‑economy partnership that can be showcased to regulators and the public.
  • Track and, where viable, support low‑water cooling innovations such as advanced air‑cooling and liquid immersion systems, while scrutinising their trade‑offs in energy use and chemical management.

In some regions, water companies could position themselves as infrastructure partners of choice for responsible digital development, rather than passive consumers of cloud services.

Extend reporting and narrative beyond the licence area

Finally, there is a narrative opportunity. As AI’s environmental footprint gains attention, utilities can choose to lead rather than react:

  • Voluntarily disclose high‑level information on the water implications of major AI programmes, the safeguards embedded in procurement, and any partnerships with low‑impact data‑centre providers.
  • Integrate “digital water” into existing sustainability and integrated reporting, alongside operational and catchment metrics, to demonstrate that stewardship is being re‑imagined for a digitised system.
  • Use stakeholder engagements (citizens’ panels, regulator briefings, investor roadshows) to surface the dilemmas openly and invite input on what “responsible AI” should mean in a water‑stressed world.

This kind of transparency can shift the conversation from accusations of hypocrisy toward a more constructive, system‑wide dialogue about trade‑offs and innovation.

As AI becomes embedded in core operations, from leakage analytics to customer engagement, every water company will, implicitly or explicitly, take a position on whose water is used to sustain its digital ambitions. The sector has a long tradition of thinking in terms of catchments, aquifers and communities; AI extends that field of responsibility into far‑flung watersheds that most boards have never discussed.

The question, then, is not whether they will use AI (they almost certainly must do) but whether they will do so as passive consumers of opaque services, or as active stewards shaping a new standard for “net‑water‑positive” digital transformation. That choice will signal, more clearly than any slogan, how seriously their organisation takes its role in the world.

Luciana Rousseau leads the development of human-centred strategies that connect behavioural research with organisational transformation. With deep expertise in the psychology of work, Luciana helps leaders understand the motivations, behaviours, and cultural dynamics that shape performance. Get in touch with her at Luciana.rousseau@morson.com

Let’s talk about how we can help you shape smarter, more inclusive ways to attract and retain specialist talent. Because at the sharp end, there’s no time to stand still.
