No “New Deal” for OpenAI: an AI blueprint unwilling to pay the price

Original Title: No “New Deal” for OpenAI
Original Author: Will Manidis
Translation: Peggy, BlockBeats

Editor’s Note: Today, OpenAI released Industrial Policy for the Intelligent Age, trying to respond to an issue that’s quickly coming due: as AI reshapes the structures of production, employment, and distribution, how will the social contract be redefined?

This document offers a seemingly comprehensive set of policy framework elements, covering public wealth funds, social safety nets, labor participation, energy infrastructure, and retraining pathways. But what truly deserves attention isn't the proposals themselves; it's the deeper tension they reveal: a tech industry that is becoming an infrastructure provider is trying to participate in distribution in the form of “recommendations,” while it isn't yet ready to bear the matching responsibilities and costs.

The article follows this thread, breaking the document down item by item. On one hand, nearly every proposal corresponds to a policy route that already exists yet has been blocked in real-world politics. On the other, the document leans repeatedly on “may,” “should,” and “could be discussed,” but contains no concrete commitments from the company itself, whether on taxation, capital transfers, or institutional constraints. It proposes outcomes while evading the mechanisms and power structures required to achieve them.

More importantly, this document seems to rest on a premise that doesn't exist: that structural redistribution can be smoothed out through dialogue, workshops, and incremental policy design. History has never worked like that. From the New Deal to institutional evolution in key industries such as energy, rail, and communications, what truly drives the reconstruction of the “social contract” usually isn't consensus; it's concession and rebalancing after conflict.

Meanwhile, real-world counterforces have already emerged: from resistance to data centers at the local level, to cross-state legislation and community organizing. The costs of AI are being felt and carried by specific groups of people, while the benefits are highly concentrated among a small number of companies. This asymmetry is turning into a political problem.

Therefore, this isn't just a policy document; it's a negotiating posture. And at its core, the question becomes clear: as the AI industry tries to take on an “infrastructure” role, is it willing, like the key industries of history, to proactively cede part of its gains in exchange for institutional stability and social acceptance?

Otherwise, the window will eventually close.

The following is the original text:

OpenAI released a policy brief today: a thirteen-page document titled Industrial Policy for the Intelligent Age. By all appearances, it is a policy text that has been carefully thought through and hopes to be taken seriously.

Unlike many past releases from OpenAI, this document is clearly designed for print distribution. The PDF is beautifully laid out, suitable for printing on glossy paper and passing around among well-dressed lobbyists in the lounge of a high-end club, each holding an $18 non-alcoholic Negroni, a Rolex on one wrist and a Whoop band on the other.

These copies will be carried into the corridors of power by the AI-friendly lobbyists who have recently flooded into Washington, D.C. They wear brand-new suits, live in upscale apartments around Dupont Circle, and will place this document on the desks of key lawmakers.

In the first part of Our AI Dilemma back in February, I wrote about what was already happening at the grassroots level: the New Brunswick city council unanimously voted down a proposed data center; hundreds of people took to the streets to stop the push for AI infrastructure; while executives in distant New Delhi were still downplaying AI-driven employment disruption, the American public was already preparing for possible conflict. I also noted that 188 organizations from two states were coordinating legal action, and that around $162 billion worth of AI projects had already been blocked or delayed.

I warned back then that using the usual soothing talking points would not solve any of the problems facing this industry.

That article had a second part, which I distributed privately to a number of relevant people, including people working across the various labs and in the U.S. government. In it, I ran a detailed scenario exercise: how a small group of highly organized actors might delay or even destroy the U.S. AI ecosystem through asymmetric violence.

I eventually reached a clear judgment: there was no safe way to release that content publicly. Still, it circulated widely enough, and in enough critical places, that a substantial number of people have read it.

So this OpenAI document can be read as a response to the increasingly widespread, cross-partisan anti-AI sentiment inside the United States. But it is clearly not a standard, run-of-the-mill reassurance statement.

And without question, it is also one of the strangest documents the tech industry has produced so far.

1. AI leaders should use the “New Deal” analogy cautiously

Right from the start of the brief, OpenAI uses the Progressive Era and the New Deal as reference points, trying to explain how society can get through the AI transition.

The Progressive Era and the New Deal did, in fact, help society reconstruct the social contract after the world had been reshaped by electricity, internal combustion engines, and mass production.

This narrative isn't new. God knows LessWrong has been using this framework for ages. But it deserves serious scrutiny, because the “history” it invokes is not the history people actually lived through.

The New Deal wasn't a product of peaceful cooperation between capital and labor. It didn't originate in Washington meetings, nor from industry leaders and policymakers sitting down to discuss how to share prosperity. In essence, the New Deal was a settlement reached after decades of industrial violence, violence through which organized labor pressured capital, bled for it, sometimes paid with lives, and eventually accumulated enough political power to force the arrangement through.

In 1892, Pinkerton guards shot and killed 11 steelworkers at Homestead. In 1897, police shot 19 unarmed miners from behind at Lattimer. In 1911, 146 garment workers burned alive in the Triangle Shirtwaist Factory fire, because management had locked the exits. In 1914, the National Guard machine-gunned and torched a tent encampment at Ludlow, killing 25 people, including 11 children; Rockefeller directly paid those soldiers' salaries. In 1921, 10,000 armed miners fought 3,000 others for five days at Blair Mountain, firing millions of rounds, with military bombers deployed against them; in the end, 925 miners were indicted for treason. In 1937, on Memorial Day, police shot and killed 10 striking workers from Republic Steel.

Frances Perkins personally witnessed women jumping from the windows of the Triangle factory, then spent the next thirty years building the institutional system that underpinned the New Deal. To be very clear: I do not endorse terrorism. But discussing the New Deal while deliberately ignoring that it took shape amid domestic conflict and near-insurrectionary conditions is itself absurd.

The 40-hour workweek wasn't a voluntary concession by capital; it was wrested from capital by people willing to risk being shot, jailed, or charged with treason. The Wagner Act wasn't a gift from enlightened capitalists; it passed in a context where factory owners were hiring private armies to shoot their own workers. Social Security wasn't consensus either; it was the minimum concession capital made to head off armed revolution. Antitrust wasn't something Standard Oil initiated on its own; the government acted after watching it buy state legislatures and concluding that, without action, the republic itself was at risk.

When OpenAI invokes this history, it is in practice invoking a process in which it would have been the target party, whether it recognizes that or not. The New Deal formed because industry, facing organized power, electoral pressure, and credible threats of violence, was forced to accept concessions to avoid revolution. The designers of those institutions didn't sit down to ask Andrew Carnegie for his views on the social contract. They acted after watching Carnegie's private armies suppress his workers.

And this document cites the conditions of that institutional remaking while admitting nothing about the sources of power that made it happen. It seems to carry a strange assumption: that we can reach the same end point through dialogue, workshops, email feedback, and API credits.

That’s not how it happened. It has never happened that way in history. The New Deal was never a PDF, and we should stop treating it as a PDF.

2. What exactly are these “proposals” saying

I want to break these proposals down more carefully, because what they reveal is genuinely interesting. Nearly every recommendation in the document can be matched to a corresponding legislative version: bills that were proposed, debated, and ultimately failed. In stitching these proposals together, the document almost never mentions that history, and precisely because it doesn't, it gives us a window into the current moment.

The document also acknowledges a risk: the economic gains brought by AI may become highly concentrated in a small number of companies—such as OpenAI.

One of OpenAI's more peculiar moves here is conceding that it may capture most of the returns from AI while striking a fairly humble posture about what it might give back to the public. Whether that posture is actually an effective negotiating strategy is far from obvious.

The document says these ideas are its initial contribution to the effort, and only the beginning. OpenAI is: (1) collecting feedback via email; (2) setting up pilot projects, offering up to $100k in scholarships and research grants, plus up to $1M in API credits, to support relevant policy research; and (3) holding a series of discussions at the OpenAI Workshop opening in Washington, D.C. in May.

But the reality is that this document commits no new capital. For a company with annualized revenue of around $25 billion, preparing for an IPO at a valuation approaching $1 trillion, grants at the $100k level are rounding errors.

The biggest “concession” in the document is actually the API credits: usage allowances for its own products. It is, in essence, distributing its own currency at near-marginal cost, handing out coupons for its own store and describing them as public investment.

Next come the specific proposals: giving workers a voice in the AI transition to improve job quality and safety. This includes establishing formal mechanisms so employees can collaborate with management, ensuring that AI applications improve job quality, enhance safety, and respect labor rights.

At root, this is describing “a union.” But in a full thirteen-page document, the term “union” appears only once.

Historically, the mechanism that actually lets workers deal with management on formal terms is collective bargaining. It is precisely this mechanism that produced the New Deal and the labor rights framework that followed. Yet collective bargaining appears nowhere in this document.

It describes the results of organized labor—voice, participation rights, constraints on harmful deployments—while deliberately avoiding the prerequisite conditions for those results: power.

If workers can’t gain a voice in AI deployments through institutional participation, they will ultimately seek that power through organized action until companies can no longer deploy AI around them. The document offers a “conclusion,” but provides no mechanisms to achieve it.

This isn't accidental. Any cross-partisan pitch that proposed unionizing large white-collar workforces at scale would face ferocious business backlash and be dead on arrival.

Deploy AI first in applications that improve job quality, for example by removing dangerous, repetitive, or tedious tasks, so employees can focus on higher-value work.

But the reality is: New Brunswick's city hall wasn't packed because data centers automate “dangerous or repetitive work.” The deployment scenario with real political mobilizing power is different: companies using AI to replace work that isn't dangerous, isn't repetitive, and isn't boring, but is work people value, skilled work, work they rely on to make a living.

That is exactly the scenario Sam Altman himself has described. He said customer service roles will “completely disappear.” He said the jobs AI replaces may not have been “real jobs.” He said children born in 2025 will “very likely never be smarter than AI.”

And this document avoids those cases. It describes an AI deployment closer to a factory safety system, a version that threatens no one, and then builds policy recommendations on that. But that world doesn't exist.

Help workers convert domain experience into entrepreneurship opportunities by lowering the barrier to starting an AI business: micro-grants, revenue-based financing, and “turnkey” startup support (standard contracts, shared back-office services, and the like), so small businesses can compete quickly.

This may be one of the most bizarre proposals in the entire document. It repackages a large-scale labor problem as “an entrepreneurship opportunity.”

The implied assumption: a customer service worker or paralegal who loses their job in Ohio or Pennsylvania can, armed with a micro-grant and template contracts, start their own AI company and compete in a market dominated by firms commanding billions of dollars in compute.

It sounds like the old advice to workers displaced by automation, repackaged in policy language: “learn to code.”

Or, in other words: “vibe code.”

Treat access to AI as a basic condition for participation in the modern economy, analogous to large-scale efforts to improve global literacy, or ensuring electricity and internet coverage reaches remote parts of the world.

OpenAI is proposing that access to the product it sells be treated as a public necessity comparable to electricity or literacy. The electricity analogy is especially telling, because opponents argue that OpenAI's data centers are driving up electricity prices in their communities.

The framing evokes the Tennessee Valley Authority (TVA), which brought electricity to rural communities as part of the New Deal. But the TVA wasn't a coupon program run by utilities. Electricity had to be forcibly turned into a public utility because private companies failed to serve rural and low-income communities; the government built the infrastructure directly through the Rural Electrification Act. The REA didn't hand out electricity credits redeemable with providers. It built power lines.

And OpenAI's proposal runs the other way: the government subsidizes public use of a product developed and sold by a private company valued at nearly $1 trillion.

Policymakers can rebalance the tax base toward capital, for example by raising capital gains rates for high earners or the corporate income tax, or through targeted measures aimed at ongoing AI profits, while also exploring new approaches such as taxing automated labor.

Note the verb: “can.” Note the subject: “policymakers.” In practice, OpenAI is proposing that other people, through democratic procedures and at some future point, consider making OpenAI pay more tax. The document doesn't specify how much OpenAI would pay, when, at what rate, or through what mechanism.

Meanwhile, OpenAI completed its conversion to a public benefit corporation in October 2025, removing the profit cap, and is preparing for an IPO at an estimated valuation nearing $1 trillion. That conversion was designed to maximize the company's ability to attract capital on favorable terms.

But this document makes no specific tax commitment. It doesn’t propose that OpenAI earmark a certain proportion of its revenue, profits, or IPO proceeds for public use. It merely suggests there may be a discussion at some point in the future.

Policymakers and AI companies should work together to determine how to seed this fund—a fund that can invest in diversified, long-term assets to capture growth from AI companies and from companies adopting and deploying AI more broadly.

A public wealth fund may be the most substantive proposal in the document, and it deserves acknowledgment. The Alaska Permanent Fund, Norway's sovereign wealth fund, and New Mexico's permanent funds are real precedents. Tying distributions to a job-displacement threshold is genuinely novel at the design level, arguably more serious than anything Congress has proposed on the topic.

But wealth funds need funding sources. The document says only that AI companies and policymakers should “jointly determine” how to seed the fund. OpenAI doesn't say it would contribute. Norway's oil fund works because Norway taxes petroleum at about 78%. Alaska's permanent fund exists because the state constitution requires that at least 25% of mineral revenues flow into it. This document offers no comparable mechanism; it proposes a discussion.

Notably, on February 3, 2025, Donald Trump signed an executive order requiring the creation of a sovereign wealth fund. The order instructs the Treasury and Commerce Secretaries to submit a plan within 90 days. Treasury Secretary Scott Bessent said the fund would be established within 12 months, and the president said he hopes it will eventually rival the scale of Saudi Arabia's Public Investment Fund. The White House fact sheet notes that the federal government currently holds about $5.7 trillion in assets, plus additional natural resource reserves.

This isn’t a fringe proposal—it’s a real initiative being pushed by the sitting president, with a clear name, timeline, and cabinet-level executing body.

OpenAI's proposed public wealth fund overlaps heavily with the president's initiative. But the document mentions neither the executive order, nor the 90-day planning requirement, nor the government's implementation process. Nor does it propose contributing real value to the fund in the form of OpenAI equity, revenue, or anything else. OpenAI is happy to invoke a concept that both echoes its own narrative and matches the president's phrasing, but it will not commit a single dollar or propose any mechanism that would channel its profits into the fund.

This is more like a rhetorical “tithe.”

Create new public-private partnership models to finance and accelerate the energy infrastructure needed to support AI expansion. Specific approaches could include: using targeted investment tax credits, direct or indirect flexible subsidies, equity investments, and other tools to reduce the cost of capital; removing market barriers to advanced technologies; and, where it aligns with the national interest, granting the federal government limited authority to accelerate cross-regional transmission projects.

This is one paragraph where OpenAI’s commercial interests and the policy proposals in the document are almost indistinguishable. OpenAI needs grid expansion. Its Stargate project plans to invest $500 billion, targeting capacity close to 10GW. In October 2025, the company submitted a document to the Office of Science and Technology Policy (OSTP) at the White House, saying $1 trillion in AI infrastructure investment will drive 5% GDP growth within three years. All the subsidies, tax credits, and accelerated approvals proposed in this section would flow directly to the companies building these data centers.

That in itself isn’t necessarily a problem. Companies have always lobbied for subsidies and more favorable approval conditions, and sometimes they do get them. The current administration has also made it clear that AI infrastructure is key to national competitiveness, and I agree with that. For grid expansion, public-private partnerships do have justification. But it should be labeled as such, accurately.

Incentivize employers and unions to run time-limited pilot programs to implement a 32-hour/four-day workweek without pay cuts. The goal is to maintain output and service levels unchanged, then convert the time saved into permanent reductions in working hours, accruable paid leave, or a combination of both.

Here “unions” appear for the first time: OpenAI proposes that employers and unions collaborate to shorten working hours. Meanwhile, OpenAI declared a company-wide “code red” in December 2025, pausing non-core projects to accelerate development, and plans to nearly double headcount to 8,000. I don't know every OpenAI employee, but the ones I know seem to be working weekends, not enjoying four-day weeks. A company proposing leisure for the people it displaces while demanding intensity from the people it hires is worth pondering.

In the history of the U.S. economy, companies that voluntarily share productivity gains are almost nonexistent; over the past fifty years, real wages have essentially stagnated relative to productivity. Historically, the mechanism that forced companies to share gains with workers was organized labor, exactly the thing this document keeps describing by its results while refusing to name. You can't invoke the New Deal while refusing to say how the New Deal happened.

Ensure the existing social safety net can operate stably, quickly, and at scale, and design a mechanism for temporary expansion that automatically triggers when relevant indicators exceed preset thresholds.

An automatic trigger tied to job-displacement metrics is a genuinely interesting policy design. It borrows from the theory of automatic macroeconomic stabilizers: government spending that kicks in during downturns without requiring new legislation. There is already serious economic research in this area.

But the document doesn’t explain who provides the funding when the trigger mechanism activates. It doesn’t propose thresholds. It doesn’t define indicators. And it doesn’t explain what happens when industry representatives question whether those indicators are misleading, or argue that job losses are only temporary, or claim that AI gains are underestimated. A “mechanism” without commitments, without a funding source, and without a governance structure cannot constitute policy.

Gradually build a benefits system not dependent on a single employer. Use portable accounts to expand access to healthcare, retirement, and skills training, so that individuals can move across different jobs, industries, education programs, or entrepreneurial paths while still retaining benefits.

“Portable benefits” isn't a new concept; it has at least a twenty-year history. The Aspen Institute's Future of Work Initiative has studied it since at least 2015. The Affordable Care Act's exchanges were themselves a step toward decoupling health coverage from employment. Senator Mark Warner proposed related legislation in 2019. Dropping this into a policy brief themed around superintelligence is like writing “we should invest in public education”: correct, uncontroversial, and almost irrelevant to the moment.

Expand opportunities in the care and connection economy, covering childcare, eldercare, education, healthcare, and community services, as a pathway for absorbing labor displaced by AI. As AI reshapes the labor market, these fields can absorb workers in transition, provided training, wages, and job quality keep pace.

This is the first time the document sketches what a “post-AGI economy” looks like: a larger share of the U.S. population working in care for children and the elderly.

Follow the logic: AI replaces productive white-collar labor; productivity gains flow to AI companies and their shareholders. Displaced workers receive some mix of public wealth fund dividends, social insurance payments, and retraining subsidies. They are retrained into the caregiving economy: childcare, eldercare, home care. The caregiving economy is funded mainly by government programs (Medicare, Medicaid, state budgets). These workers then spend their income in a consumer economy with no human production base.

This is a closed-loop cycle of government transfers: AI completes production, and returns are owned by capital. The government redistributes some portion to displaced workers. Those workers enter caregiving jobs funded by the government. Funds circulate between government, workers, caregiving services, and government again. In this scenario, there is no real economy—no wealth creation, no ownership, and no productive capacity. Only some people operate AI and capture returns, while others recycle through caregiving services that receive government transfer payments.

And the caregiving economy meant to absorb these jobs is itself in the middle of one of the largest fraud investigations in the history of the U.S. benefits system. Under Dr. Mehmet Oz, the Centers for Medicare & Medicaid Services (CMS) is conducting a sweeping crackdown on home-care fraud. In Minnesota alone, more than $1 billion in federal funds was delayed after a single quarter in which $240 million in claims was found unverifiable or possibly fraudulent. Nationwide, Medicaid Fraud Control Units recovered nearly $2 billion in fiscal year 2025 and secured more than 1,000 criminal convictions, with more convictions for personal care services than for any other type of healthcare service. So far in 2025, the government has paused $5.7 billion in allegedly fraudulent payments. Three weeks ago, New York exposed a $120 million Medicare and Medicaid fraud case. Between 2018 and 2024, home-care spending grew from $937 million per month to $2.5 billion per month.

The “safe haven” the document proposes for the U.S. economy is an industry whose spending has already doubled, which the federal government considers rife with fraud, which racks up more criminal convictions than any other corner of healthcare, and from which the current administration is withholding billions in funding across multiple states for insufficient state-level oversight.

In effect, this document is asking the American public to accept the following pathway: OpenAI causes you to lose your white-collar job. The government sends you money through a public wealth fund. You are retrained into eldercare. Your wages are paid by Medicaid. Medicaid is under fraud investigation. The fund that sends you money was established at a workshop attended by AI executives. OpenAI keeps all productivity gains and prepares to go public. You spend the government money on government-funded childcare so you can work in government-funded eldercare. And if you want to study all of this, you can also apply for funding from OpenAI to research the economic displacement issues caused by OpenAI.

I'll pause here, because a pattern worth stating plainly has emerged across these proposals: the document calls for a public wealth fund, an expanded social safety net, portable benefits decoupled from employment, government-funded caregiving reemployment, a tax base rebalanced toward capital, and productivity dividends delivered through a four-day workweek.

Substantively, these are all liberal policy outcomes—almost exactly Bernie Sanders’ policy agenda.

I'm not arguing against those outcomes. I'm pointing out that, politically, the document is entirely incoherent. Those outcomes require liberal policy tools: new taxes, expanded government spending, new welfare programs, organized labor, and a Congress willing to appropriate money for social infrastructure. The document proposes none of those tools. It floats liberal outcomes in a MAGA political context and leaves implementation to “democratic procedures,” meaning other people, at some later point, while today's political environment is moving in nearly the opposite direction.

This document exists in a political vacuum. It assumes these proposals can be evaluated in a neutral, rational environment. But such a world has never existed. In the real world, there is a clear governing coalition, with clear priorities—and those priorities are incompatible with most of the proposals in the document. A serious policy document should directly respond to this reality: whether these proposals can be implemented in the current environment; what legislative routes are needed; what political support is required; and what the timeline looks like.

But the document provides none of this. It doesn’t name committees. It doesn’t describe legislative pathways. It doesn’t count votes. It doesn’t say which people in Congress would support a public wealth fund. It doesn’t explain which committee would have jurisdiction over a dynamic safety net, or how portable benefits would survive budget reconciliation procedures. It doesn’t respond to the fact that the House last year tried to comprehensively ban state-level AI regulation. It doesn’t touch budgeting, deficits, or the current attitude toward additional welfare spending. It doesn’t explain how these proposals would be scored by the Congressional Budget Office (CBO), or how funding sources would match.

OpenAI hired some very serious policy researchers, but this document seems not to understand how Washington works. It proposes liberal outcomes in a conservative political environment, without providing liberal tools—published by a company publicly aligned with the current administration—while asking to be treated as serious industrial policy.

Build a distributed AI experiment network to scale up the ability to test and validate AI-generated hypotheses.

This is a reasonable research proposal—while also being a proposal to create a distributed institutional customer base for OpenAI products using taxpayer funds, covering universities and hospitals. The document proposes that this infrastructure should not be concentrated among a small number of elite institutions. But it doesn’t mention that the AI models driving these systems will likely still be concentrated in a small number of elite companies, including OpenAI.

Frontier AI companies should adopt governance structures that embed public-interest accountability—for example, a Public Benefit Corporation—and, through governance mechanisms aligned with their mission, ensure that AI gains are widely shared, including long-term charitable and public-benefit investments.

OpenAI completed its PBC conversion in October 2025, after long legal disputes with the California and Delaware attorneys general, with many details still entangled in lawsuits brought by Elon Musk. The conversion removed the profit cap, the original 100x limit on returns beyond which gains flowed back to the nonprofit mission, and cleared the way for an IPO. The nonprofit that once controlled the company now holds 26% of its equity, slightly less than Microsoft's 27%.

The document says the public benefit corporation is a governance model suited to frontier AI. So it's worth saying plainly what a PBC actually is and what it actually requires, because the label does far more work here than the structure itself.

I should disclose: I'm friendly with some of the people who invented the PBC, and I've had the chance to learn from people driving the B Lab movement. They are serious people. I differ from them politically, but I don't doubt their sincerity. The concept itself is real: brands like Patagonia have adopted the structure, and it has spread to 43 states, mostly passing with broad approval.

The issue isn't the people; it's the structure, and specifically whether it can do what the document claims. Legally, a PBC is required only to “consider” the interests of stakeholders beyond shareholders. Note that word: consider. There is no enforcement mechanism and no penalty for failing to do so. In the years since Delaware's PBC statute took effect, there has not been a single case in which shareholders successfully forced a company to honor its public mission. Not one. Even in litigation, remedies are limited to injunctions; there are no monetary damages. A company can register as a PBC, write a public mission into its charter, and operate exactly like a traditional corporation, because no one can force it to comply. The structure is closer to a brand label with filing fees: like a New Year's resolution, nothing enforces it.

AI data centers should bear their own energy costs, avoid being subsidized by residential ratepayers, and create local jobs and tax revenue. That is the entirety of the document's response to the most direct, most specific, and most organized opposition it currently faces.

In February, I wrote that between May 2024 and June 2025, U.S. data center projects worth about $162 billion were blocked or delayed by organized community opposition. More than 188 organizations were coordinating legal action across state lines, and two-thirds of the contested projects were halted. A Republican won a Texas state senate seat on a campaign built around explicit opposition to data center development. In New Brunswick, hundreds packed the municipal hall before the meeting began, hundreds more gathered outside, and the city council ultimately rejected the proposal unanimously.

Since February, the situation has worsened. The opposition the industry faces is becoming more organized, and the document neither acknowledges this nor appears to know it.

In just the first six weeks of 2026, more than 300 data-center-related bills were introduced across more than 30 states. At least a dozen states introduced bills to pause new data center construction, including Georgia, Maine, Maryland, Michigan, Minnesota, New Hampshire, New York, Oklahoma, Rhode Island, South Dakota, Vermont, Virginia, and Wisconsin. Maine may become the first state to pass one: the bill has bipartisan support in the state house, the senate is expected to pass it, and the governor has voiced support.

It's important to be clear about what this is: not diffuse public sentiment, but organized, legislative-level political action happening in statehouses in real time, and not divided along party lines.

3. What this industry truly needs to “pay”

Every proposal in the document corresponds to legislation that has already failed or stalled: died in committee, voted down, watered down by industry, starved of funding, or left stranded in white papers. The 32-hour workweek never reached a vote; the wealth tax was introduced four times and never made it out of committee; the PRO Act passed the House once and stalled in the Senate; the caregiving provisions of Build Back Better collapsed when a single senator withdrew support; broadband subsidies expired and 23 million households lost coverage; SB 1047 was vetoed; a robot tax never even got a bill number. This document strings these half-dead proposals together, strips them of their political context, and presents them as a “starting point for discussion.” But the discussion already happened. These proposals already failed.

The deeper problem isn’t whether these proposals are outdated. It’s that the document makes no commitments. It doesn’t require OpenAI to do anything. No sacrifices. No value transfer.

What actually answers public action and regulatory pressure is action in kind, and action means paying a cost. A document like this, performing concern in Washington's idiom while refusing to transfer any AI gains from the company to the communities and workers bearing the costs, is doomed from the start.

Let me be clear: this isn't a left-wing argument. It isn't an argument for violence or for unions. It's a survival argument. Historically, every industry that made it through a wave of intense public opposition made concessions, not out of altruism, but because the cost of refusing was higher.

The railroad magnates didn't voluntarily accept the Interstate Commerce Commission in 1887, but the roads that survived were the ones that accepted rate regulation before the government imposed something harsher. The nuclear power industry accepted extremely high regulatory costs because otherwise the public would not have let it build at all. North Sea oil companies accepted Norway's 78% extraction tax because the alternative was nationalization.

The document suggests policymakers could raise taxes on capital; OpenAI could commit to paying them. It proposes a public wealth fund; OpenAI could fund it. It proposes that data centers bear their own energy costs; OpenAI could implement that proactively everywhere it operates. It proposes PBC governance; OpenAI could restore the profit cap it removed six months ago.

But none of that is in the document. The document offers a workshop, some grants denominated in its own product, and an unattended email inbox.

The AI industry still has a window. Every industry that has faced a comparable wave of opposition had one. But using the window means voluntarily accepting constraints that actually show up on the financial statements and actually consume profit, before the opposing forces coalesce. Once the window closes, as I wrote in Our AI Dilemma, it doesn't reopen; the relationship between the industry and the public becomes permanently adversarial. Tobacco had a window. Fossil fuels had a window. Social media had a window. In each case the industry optimized for the short term, and the window closed.

4. How we got to this point

I’ve spent my entire career in the AI field. I make no secret of my pro-AI stance. I believe the technology is transformative, and I believe the United States should take a leading role in its development. I also believe that OpenAI has achieved extraordinary results, and very likely will achieve more. I’m writing this not as an outsider.

But I also remember what it looked like before all of this happened, and the distance between then and now is worth reflecting on seriously.

In the past few years, the relationship between the tech industry and the federal government has transformed profoundly, and I'm not sure anyone has fully absorbed the change, least of all the people who lived through it. Not long ago, the default posture of almost every tech company toward government was total detachment and distrust. You didn't go to Washington unless you were subpoenaed. Washington was where good companies got hurt. If you truly had to engage, you spent tens of millions of dollars on lobbyists to handle the government relationship for you, and then tried not to think about it too much. The whole industry treated the federal government like a weather system: you monitor it, prepare for it, respond at a distance when necessary, but you don't actually get involved.

Then things changed. The political reshuffling of the past few years produced a strange, short, exhilarating phase people called the “tech right.” It existed in its own way, and it was real. Founders went to Washington and suddenly discovered they had opinions about many things. They visited the Heritage Foundation and Hillsdale College and discovered there really were people interested in what they had to say. They started writing policy memos, buying suits, sometimes even remembering to cut the tacking stitches from the jacket vents. They went to dinners and mixers with senators and were surprised to find the senators genuinely happy to meet them. It felt like coming home and like a strange reunion at once: a rush of belonging and engagement, and the nervous sense that this was something new and different.

Maybe that phase is ending, or has already ended. What remains is not what we thought we would get. The founders who went to Washington didn't bring back a durable, coherent theory of how technology and democratic governance should coexist. They brought back networks, access, and the sense that “we should also have a seat at the table.” But that table was set by people who have been sitting at it for decades, who know exactly how it works, and who will still be sitting there after the tech industry moves on to the next thing.

After that strange, short false spring, what truly remains is something more consequential and less romantic. The U.S. now has a set of strategically crucial tech companies whose importance touches national security, economic competitiveness, and countless aspects of daily life for hundreds of millions of people. Their capital rivals that of states. A huge share of GDP growth depends on whether they succeed or fail. What they are building is infrastructure that will last for decades.

And these companies now deal with the government as if they hold all the chips. That is the backdrop against which Industrial Policy for the Intelligent Age has to be read. At its core, it is a negotiating posture.

We’ve never seen tech companies act like this before. We’ve seen defense contractors negotiate with the government, but defense contractors understand that their entire business fundamentally depends on whether the government allows them to exist. We’ve seen oil companies negotiate with the government, but oil companies understand that the resources they extract fundamentally belong to the public. We’ve also seen telecom companies negotiate with the government, but telecom accepted the “common carrier obligation,” because that was the cost required for enjoying monopoly status.

But the AI industry has so far accepted nothing of the kind. It hasn't acknowledged that it operates by public permission; it hasn't accepted that the resources it consumes belong to the communities that provide them; and it has offered no tithe.

But this industry needs to. What it truly needs isn't another set of recommendations addressed to policymakers who have already vetoed similar proposals; it's binding commitments that transfer real value from the company to the communities that bear its costs.

This isn’t about sounding noble—it’s about costs. If you don’t pay, the costs will be higher.

[Original Link]
