Hugging Face sustainability researcher turned entrepreneur aims to have carbon-emission numbers shown alongside every ChatGPT conversation.

Sasha Luccioni, an AI sustainability researcher at Hugging Face, has resigned to co-found Sustainable AI Group, aiming to have energy-consumption and carbon-emissions numbers displayed alongside every AI query interface, such as ChatGPT and Claude.

Table of Contents


  • The internal pressure line within companies is moving upward
  • She built an energy consumption leaderboard, but big companies refuse to participate
  • Sustainable AI Group: From researcher to pressure-maker

A software engineer turns on the AI assistant provided by the company to complete a task that could be solved with a traditional search. Nobody, not even the AI company itself, knows exactly how much electricity that action consumes.

After four years at Hugging Face, Luccioni tried to make this number visible. She failed to persuade the industry. So she resigned, planning to keep applying pressure from the outside.

The internal pressure line within companies is moving upward

In a WIRED interview, Luccioni described a scene she’s been hearing more and more often: corporate employees start asking management, “You force us to use Copilot—what impact does this have on our ESG goals?”

There is currently no standard answer to this question, because no major AI company discloses energy-consumption and carbon-emissions data for each query in its product interface. When users open ChatGPT or Claude, they see no indication of the environmental cost.

Luccioni’s demand is direct: put the energy-consumption numbers next to every AI conversation. She argues this isn’t only a transparency issue but also a competitive strategy. Her logic is analogous to Anthropic forgoing the extra brand boost that comes from U.S. government military use: whichever AI company first adopts data centers powered by renewable energy and openly discloses the data can gain a differentiated market advantage.

This demand is starting to find institutional support. The EU AI Act has incorporated sustainability clauses, and the first wave of reporting obligations is being implemented. In Asia as well, including countries collaborating with the International Energy Agency (IEA), data-center transparency is beginning to be required. Regulatory pressure is gradually shifting from the periphery toward the core.

She built an energy consumption leaderboard, but big companies refuse to participate

During her time at Hugging Face, Luccioni built AI Energy Score, an open-source leaderboard for AI model energy efficiency, attempting to compare electricity consumption across models of different sizes on the same benchmarks.

The problem is that this leaderboard can only evaluate models that are willing to participate. Major large language model providers, including OpenAI, Google, and Anthropic, have not submitted data.

Luccioni’s interpretation is blunt. She points out that big AI companies face a structural conflict of interest: they sell both model access and the underlying computing resources. The more users consume, and the more expensive the model, the higher the computing-power revenue. Under this business model, pushing users toward smaller, more energy-efficient models amounts to cutting their own revenue.

Her criticism is backed by concrete evidence. Data she has tracked over a long period shows that in most enterprise application scenarios, it is classification models (lightweight models trained for specific tasks), rather than general-purpose large language models, that have carried much of the burden of AI productivity in recent years.

For a task like judging the sentiment of customer service emails, handling it with a specialized classification model might require less than one percent of the computing power needed by a GPT-4-level model. This means that without comparative information, enterprises may systematically use models that are ten times or even a hundred times larger than what is actually needed.
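The "less than one percent" claim can be sanity-checked with back-of-the-envelope arithmetic, using the standard rule of thumb of roughly 2 FLOPs per model parameter per processed token. The parameter counts below are illustrative assumptions (a DistilBERT-class classifier at ~66M parameters, and a frontier LLM assumed at ~1T parameters, a figure that has not been officially disclosed), not measured values:

```python
def flops_per_token(n_params: float) -> float:
    """Rough inference cost: ~2 FLOPs per parameter per token."""
    return 2 * n_params

classifier_params = 66e6     # DistilBERT-class sentiment classifier (~66M params)
frontier_params = 1e12       # assumed GPT-4-scale model (~1T params, illustrative)

ratio = flops_per_token(classifier_params) / flops_per_token(frontier_params)
print(f"classifier uses {ratio:.4%} of the per-token compute")  # well under 1%
```

Under these assumptions the classifier needs on the order of 0.007% of the frontier model's per-token compute, consistent with the article's "less than one percent" figure even if the assumed parameter counts are off by an order of magnitude.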

Sustainable AI Group: From researcher to pressure-maker

After leaving Hugging Face, Luccioni co-founded Sustainable AI Group with former Salesforce sustainability chief Boris Gamazaychikov. The pairing is deliberately complementary: one comes from technical measurement, the other from corporate sustainability governance.

Their combined expertise targets exactly the hardest link in the chain right now: translating sustainability requirements from the language of corporate CSR departments into the actual decision criteria used when procuring AI tools.

Luccioni’s position is not to oppose AI development. The core of her argument is that establishing a mapping between task complexity and model scale is an engineering decision that can reduce both cost and carbon emissions—not just a moral stance. If a company can systematically choose the appropriate model scale based on the task, there may be substantial room for savings in compute expenditure.

Her new organization’s goal is to provide external validation and advocacy support for this decision framework, while continuing to pressure AI companies to disclose energy-consumption numbers.

The EU’s regulatory timeline gives this initiative an external anchor. Once companies are required to report sustainability data under the AI Act framework, an AI vendor’s failure to provide energy-consumption information will be upgraded from an ethical issue to a compliance risk. That may be the most structurally significant leverage point to date.
