Burning through $1.3 million in a month! “Lobster Father” Peter Steinberger reveals his token bill, with OpenAI covering all expenses


Yes, you read that right!

Lobster Father Peter Steinberger spent more than $1.3 million on API tokens in a single month.

Over 30 days, his total token consumption came to 603 billion, across 7.6 million requests.

Netizens exclaimed, “Hiring a development team might be cheaper than this.”

Another user asked, “Bro, you’d better show some real skills. If you can’t do things that million-dollar engineers can’t handle, this ad might just be a sign that the frontier labs bubble is starting to burst. And this price is even subsidized. Oh my God, if you count the actual cost, it would be much more expensive.”

Steinberger replied, “I turned off the fast mode, and the price directly dropped by 70%, so it’s roughly the cost of one employee.”

Someone else mocked directly, “$1.3 million a month? And you didn’t deliver anything? You’re truly the worst marketing genius in history.”

Steinberger also retorted, “Not necessarily, brother. Your definition of ‘nothing delivered’ might be a bit too special.”

Steinberger added, “All this code was written with Codex. The messier pull requests that I cleaned up later were probably written by Claude.”

After the token bill sparked heated discussion, Steinberger quickly responded, saying he was trying to answer one question:

“How will we build software in the future if tokens no longer matter?”

We run about 100 Codex instances in the cloud around the clock, reviewing every PR and issue. Once a fix is merged into the main branch, @clawsweeper will eventually find the old issue that has been open for six months and close it with an exact reference.
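
A workflow like this boils down to matching merged fixes against the open-issue backlog. The sketch below is purely illustrative (clawsweeper’s actual implementation is not public): it scans merged commit messages for “fixes #N”-style references and closes any open issue they point to. In a real bot, the `open_issues.pop` call would be a call to the issue tracker’s API.

```python
import re

def close_fixed_issues(merged_commit_messages, open_issues):
    """Close open issues referenced as fixed in merged commit messages.

    open_issues: dict mapping issue number -> issue title (mutated in place).
    Returns the list of issue numbers that were closed.
    """
    closed = []
    for msg in merged_commit_messages:
        # Match references like "fixes #123" or "closes #45".
        for ref in re.findall(r"(?:fixes|closes)\s+#(\d+)", msg, re.IGNORECASE):
            number = int(ref)
            if number in open_issues:
                open_issues.pop(number)  # a real bot would call the tracker's API here
                closed.append(number)
    return closed

# Example: one merged fix references a long-open issue.
issues = {101: "Crash on login", 202: "Slow startup"}
print(close_fixed_issues(["fix: handle nil token, fixes #101"], issues))
```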

We run Codex on every commit to review security issues, because these are too easy to overlook.

We use Codex to deduplicate issues, discover clusters, and send reports on the most urgent problems.
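
Issue deduplication of this kind can be approximated even without a model; the minimal sketch below uses plain string similarity as a stand-in for Codex’s semantic judgment, greedily grouping issue titles into clusters of likely duplicates.

```python
from difflib import SequenceMatcher

def cluster_issues(titles, threshold=0.6):
    """Greedily group issue titles whose similarity to a cluster's
    representative title exceeds the given threshold."""
    clusters = []
    for title in titles:
        for cluster in clusters:
            # Compare against the cluster's first (representative) title.
            if SequenceMatcher(None, title.lower(), cluster[0].lower()).ratio() >= threshold:
                cluster.append(title)
                break
        else:
            clusters.append([title])
    return clusters

titles = [
    "App crashes when opening settings",
    "Crash when opening the settings page",
    "Dark mode toggle does nothing",
]
for cluster in cluster_issues(titles):
    print(len(cluster), cluster[0])
```

A production version would swap the string-similarity check for embedding distance or a model call, but the clustering loop stays the same shape.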

We have Agents that can reproduce complex environments, spin up temporary crabbox.sh machines, log into platforms like Telegram, record videos, and post before-and-after demonstrations of fixes in PRs.

Some Codex instances monitor new issues; if an issue fits our already defined product vision, they automatically create a PR for it. Another Codex instance then reviews these PRs.

We also run Codex to scan comment spam and ban related users.

We deploy Codex instances to verify performance benchmarks and report regressions to Discord.
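
A regression check like this reduces to comparing fresh numbers against a stored baseline. The hypothetical sketch below flags benchmarks that slowed down beyond a tolerance and formats a report, which a real pipeline would then post to a Discord webhook.

```python
def find_regressions(baseline_ms, current_ms, tolerance=0.10):
    """Return benchmarks whose runtime grew by more than `tolerance` (10% default)."""
    regressions = {}
    for name, base in baseline_ms.items():
        cur = current_ms.get(name)
        if cur is not None and cur > base * (1 + tolerance):
            regressions[name] = (base, cur)
    return regressions

def format_report(regressions):
    """Render a plain-text report suitable for a chat message."""
    if not regressions:
        return "All benchmarks within tolerance."
    lines = ["Benchmark regressions detected:"]
    for name, (base, cur) in sorted(regressions.items()):
        lines.append(f"- {name}: {base:.1f} ms -> {cur:.1f} ms (+{(cur / base - 1) * 100:.0f}%)")
    return "\n".join(lines)

baseline = {"startup": 120.0, "message_send": 45.0}
current = {"startup": 150.0, "message_send": 46.0}
print(format_report(find_regressions(baseline, current)))
```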

We have Agents listening to our meetings, proactively starting work. For example, when we discuss new features, they create PRs during our discussion.

We built clatch.ai, which splits every project into functional units for review, hunting for bugs and regressions.

For security, we do the same splitting, combined with Vercel’s deepsec and Codex Security, to find regressions and vulnerabilities.

All these automations allow us to run this project with an extremely lean team.

The question is, who bears such high costs? Clearly not him.

“OpenAI doesn’t charge me for my token usage.”

Tokenmaxxing: How long can this throughput race last?

This directly kills the competition.

Recently, the AI community has been buzzing about Tokenmaxxing: major companies including Meta and Amazon have even set up internal token-usage leaderboards, making token consumption a KPI for employees’ daily work.

At the time, Meta’s top-ranked individual user consumed 281 billion tokens, which, depending on model pricing, could cost millions of dollars. Lobster Father Peter Steinberger’s 603 billion tokens in a month is an overwhelming lead.

Karpathy, formerly of Tesla and OpenAI, admitted on a podcast that he also feels the pressure to maximize AI usage: “It’s all about tokens. How many tokens can you process? How much token throughput can you mobilize?”

Tokens are gradually becoming a new means of production, even a unit for measuring AI operational density. A team with enough token throughput, a sufficiently refined task-decomposition method, and a reliable verification loop can achieve an engineering density that previously only large teams could reach.

Just now, OpenAI President Greg Brockman tweeted, “Tokens are rapidly becoming a universal input for solving problems.”

But we believe success is not just about quantity. As Lobster Father’s automated agent development flow shows, a good project-management model may be the real key to victory.

Source: Machine Heart

