The AI buffet is over: GitHub Copilot's costs are too high to bear, and starting June 1 it will bill based on usage.

Unable to keep absorbing compute costs, GitHub Copilot has announced that starting June 1 it will drop its "unlimited" plan and switch to usage-based billing. The move has triggered strong backlash and a wave of cancellations among developers, and highlights the infrastructure strain facing the AI industry.

The era of AI unlimited plans is over, GitHub Copilot changes its billing model

Previously praised as the "all-you-can-eat" of AI services, GitHub Copilot is now turning from a buffet into an à la carte restaurant.

According to The Register, Microsoft-owned GitHub admitted in a recent announcement that it can no longer absorb the losses, and has decided that starting June 1, 2026, billing will shift from a request-based to a usage-based model.

Under the original model, subscribers received a fixed number of premium requests, but a request's price did not account for task complexity, so the compute cost of some prompts far exceeded subscription revenue.

Image source: GitHub

GitHub Product Lead Mario Rodriguez explained that under the request model, a simple chat and a multi-hour AI programming task could count exactly the same, while the company absorbed the rising inference costs, making the model unsustainable.

As early as April 20, GitHub Copilot began adjusting its personal subscription plans, suspending new sign-ups for GitHub Copilot Pro, Pro+, and student plans and tightening usage limits on individual plans.

Starting June 1, GitHub Copilot introduces virtual billing units

Previously, GitHub Copilot provided unlimited AI assistance for a fixed monthly fee, earning it the nickname "AI all-you-can-eat"; compared with mainstream options like Codex, Cursor, and Claude Code, it has been a low-key, high-value choice for developers.

With the move to usage-based billing, however, GitHub Copilot's charges will be tied directly to token consumption.

Because different models have different rates, GitHub has designed a virtual unit called GitHub AI Points, valued at $0.01 USD. Microsoft will convert user input, output, and cached tokens into point costs based on standard API rates.

Rodriguez stated that future GitHub Copilot subscription plans will include a fixed amount of AI points per month, with optional additional purchases.

One difficulty with usage-based billing is that users cannot know in advance how many tokens a given input will consume, and tool calls make the math even harder. GitHub therefore plans to launch a billing preview feature in early May, letting users estimate costs before the June transition.
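The points arithmetic described above can be sketched roughly as follows. Note that the per-million-token rates below are illustrative assumptions, not GitHub's published figures; only the $0.01-per-point value comes from the announcement.

```python
# Rough sketch of the GitHub AI Points conversion described in the article.
# The rates below are hypothetical placeholders for "standard API rates".

POINT_VALUE_USD = 0.01  # one GitHub AI Point is worth US$0.01

# Illustrative USD cost per million tokens (assumed, not official)
RATES = {
    "input": 3.00,
    "output": 15.00,
    "cached": 0.30,
}

def estimate_points(input_tokens: int, output_tokens: int,
                    cached_tokens: int = 0) -> float:
    """Convert token usage into an estimated AI Points cost."""
    usd = (
        input_tokens / 1_000_000 * RATES["input"]
        + output_tokens / 1_000_000 * RATES["output"]
        + cached_tokens / 1_000_000 * RATES["cached"]
    )
    return usd / POINT_VALUE_USD  # dollars -> points

# Example session: 50k input, 10k output, 20k cached tokens
print(round(estimate_points(50_000, 10_000, 20_000), 2))
```

Under these assumed rates, the example session would cost about 30.6 points, i.e. roughly $0.31; the actual figures depend on the model's real API pricing.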

Reddit community backlash, users threaten to cancel subscriptions

Unsurprisingly, the change in GitHub Copilot’s billing model has sparked significant backlash on Reddit.

Some users pointed out that if billing is usage-based anyway, services like OpenRouter already offer that without a subscription fee; the new system effectively makes users pay full API prices, erasing the value of subscribing at all.

Many annual subscribers feel shortchanged, estimating that costs for certain models could rise by dozens of times, and are demanding to cancel.

There are also voices in the community calling for migration, with many developers saying they will switch to tools like Claude Code or Cursor, or even upgrade hardware to run open-source models like Alibaba’s Qwen 3.6 27B locally.

Image source: Reddit

OpenClaw sparks new wave, AI infrastructure overload

The change at GitHub Copilot reflects the broader infrastructure challenges faced by the AI industry.

In February this year, the open-source AI assistant OpenClaw, nicknamed “Lobster,” drew widespread attention, prompting many developers to experiment with AI agents running 24/7 for various tasks, and the improved capabilities of models encouraged more exploration of AI programming.

As a result, AI companies that previously subsidized subscriptions are facing demand far beyond their inference infrastructure capacity; even giants like Anthropic and OpenAI have run into capacity problems. Claude Code recently fixed three major bugs that had degraded output quality and increased latency (what users called it being "dumbed down" or "losing intelligence") and reset user quotas.

Until the industry finds a way to balance costs and user experience, the resource consumption driven by massive AI compute demand will keep causing a "price correction effect" across the entire AI sector.

Further reading:
Claude Code really got dumber! Officially admits three major bugs, user subscription quotas fully reset

Legislators propose banning data centers, environmental groups criticize ecological disaster! The First Lady is walking into the White House with an AI robot
