Trump's AI Farce: If You Don't Pay, I'll Insult You
Author: 0x2333
Repost: Mars Finance
Anthropic CEO Dario Amodei sent a 1,600-word internal memo to all employees last Friday. The memo was leaked today by the tech outlet The Information, instantly igniting a firestorm across Silicon Valley.
The core message of the memo is just one sentence: Anthropic was blacklisted by the Trump administration not because of security disagreements, but because of a lack of donations.
$25 Million Gap
Dario specifically mentioned OpenAI in the memo.
He stated that the real reason the Trump administration disliked Anthropic was that the company didn't donate to Trump and didn't offer "dictator-style praise." OpenAI, meanwhile, provided both the money and the gestures.
In September 2025, OpenAI President Greg Brockman and his wife donated $25 million to Trump’s MAGA Inc. super PAC. According to Federal Election Commission filings, this was MAGA Inc.'s largest single donation that period, accounting for nearly a quarter of their half-year fundraising total. Brockman later posted on social media that the donation was to “support policies that promote American innovation,” praising the Trump administration for “being willing to engage directly with the AI community.”
OpenAI CEO Sam Altman took a different approach. He didn't donate large sums directly to MAGA Inc., but in December 2024 he gave $1 million to Trump's inauguration committee. More important was his posture: the day after Trump took office, Altman stood behind the presidential seal in the White House Roosevelt Room to announce Stargate, the $500 billion AI infrastructure project, telling Trump on camera, "This wouldn't be possible without you." At the White House tech dinner in September 2025, he again told Trump, "Thank you for being such a pro-business, pro-innovation president."
Interestingly, the same Sam Altman wrote openly in 2016: "For anyone familiar with German history of the 1930s, watching Trump's actions is chilling." He even compared Trump to Hitler and invoked the "big lie." Before the 2024 election, he donated $200,000 to support Biden's re-election.
And Anthropic did nothing. No donations, no dinners, no standing behind the presidential seal to thank him.
This is the first time in the AI industry that the CEO of a top-tier company has publicly stated: your treatment in Washington depends on how much you give to the White House.
Dario also directly criticized OpenAI’s contract with the Pentagon. He said OpenAI accepted the contract because “they care about placating employees, while we care about truly preventing misuse,” and claimed that OpenAI’s PR around the contract was “naked lies.”
The White House’s response indirectly confirmed Dario’s claims. An official told Axios: “You can’t believe Claude is not secretly executing Dario’s personal agenda in classified environments.” Treasury Secretary Scott Bessent directly responded on Twitter, stating that private companies should not influence U.S. national security policies.
None of the responses mentioned technical disagreements over security clauses. It was all personal attacks.
After the memo leaked, former Google CEO Eric Schmidt publicly supported Dario, saying, “Dario is right; this is one of the most important decisions our society faces.” Meanwhile, OpenAI CEO Altman admitted in an internal memo that signing the Pentagon contract just hours after Anthropic’s blacklisting “seems opportunistic and reckless.”
Both sides place their bets, and Palantir is caught in the middle
The other half of the memo tells the story of Palantir, America's largest defense data analytics company, valued at about $350 billion.
Palantir doesn't build models or chips; it's like a government-only version of OpenClaw, an intermediary layer that integrates large models like Claude and GPT into chat tools and workflows. Palantir plugs other companies' AI models into classified military data systems, enabling them to read intelligence, run analysis, and identify targets. 54% of its revenue comes from government contracts, with the U.S. market accounting for 74%. Its relationship with the Pentagon is not a partnership but a symbiosis.
By the end of 2024, Anthropic, via Palantir, entered the Pentagon’s classified networks, with Claude becoming the first frontier AI model deployed in U.S. military classified systems. In July 2025, the Pentagon awarded Anthropic a $200 million contract. Palantir manages the Maven intelligence system (the Pentagon’s flagship AI project for intelligence fusion and target recognition), with a contract ceiling of $1.3 billion. The six U.S. combatant commands and NATO all run Claude.
Then the problems started. In January 2026, during the operation to capture Venezuelan President Maduro, Claude took part in intelligence analysis via Palantir. Anthropic later asked Palantir whether Claude had been involved in firing decisions. Palantir forwarded the question to the Pentagon. The military concluded that this amounted to an AI vendor reviewing military operations after the fact, and relations soured irreversibly.
At a critical moment in the negotiations between Anthropic and the Pentagon, Palantir made a move that added fuel to the fire: it pitched a self-developed "classifier" security solution to the Pentagon, claiming it could use machine learning to automatically determine whether each use of Claude crossed a red line. The implication: even if Anthropic refused to sign an unrestricted contract, Palantir could rein in the model itself. This scheme effectively gave the Pentagon a way out, since if Palantir could manage the models, Anthropic's security clauses became redundant.
Dario tore this plan apart in the memo, saying "about 20% is genuine, 80% is show." The reasons: models cannot tell whether they are inside an autonomous weapons loop; they don't know whether the data they analyze is foreign or belongs to U.S. citizens; they don't know whether the data was obtained with user consent or through gray channels; and jailbreak attacks are frequent and easy to execute. Palantir's classifier couldn't answer any of these four questions.
Dario said Palantir’s true understanding of Anthropic’s stance was: “You have some unhappy employees, and you need to give them something to soothe them.”
On March 3, Palantir CEO Alex Karp, speaking at the Washington defense tech summit hosted by top Silicon Valley venture firm Andreessen Horowitz, took a shot at unnamed targets: "If Silicon Valley thinks it can take all the white-collar jobs and then screw the military, you're retarded." Everyone knew who he was talking about. What he didn't mention: the "uncooperative" company he was referring to is his own platform's core AI supplier.
Palantir sold the Pentagon a security layer that runs on Anthropic's Claude. Anthropic's CEO called that security layer a show. Palantir found a reason to oust Anthropic, but having done so, the biggest loser is Palantir itself.
Engine switching is more painful than expected
Reading this, one might think: Claude is just a model base; can’t we just swap in OpenAI’s GPT or xAI’s Grok, like switching default models in OpenClaw?
Not that simple. Reuters today cited two insiders saying that Maven contains a large number of prompts and workflows built around Claude. It's not just a matter of changing an API endpoint: prompts, chains, output formats, and security audit processes are all tuned to Claude's behavior. Switching models means rebuilding and re-testing an entire pipeline used for military intelligence analysis and target recognition. Insiders say Palantir needs to "rebuild parts of the software."
The Maven contract ceiling is $1.3 billion, lasting until 2029. Deployment spans all six U.S. combatant commands and NATO. The U.S. National Geospatial-Intelligence Agency plans that by June 2026, Maven will deliver “100% machine-generated” intelligence to warzone commanders. Now that the engine needs replacing, this timeline will likely slip. Wall Street analyst Piper Sandler noted: “Anthropic is deeply embedded in the military and intelligence systems. Accessing and negotiating alternative technologies takes time and resources, which could be better spent on growth opportunities.”
Michael Burry, the famed short-seller portrayed in the film "The Big Short" and Palantir's most prominent critic, added: "The six-month transition period clearly shows that the stickiness is in Claude's technology, not Palantir's platform. If Claude were as easy to swap as in OpenClaw, why would the Pentagon grant a six-month transition?"
Wall Street doesn't care about these details. After Anthropic's blacklisting, boutique tech investment bank Rosenblatt raised Palantir's target price from $150 to $200, and UBS also upgraded its rating. On March 4, Palantir's stock rose 3.28%. Meanwhile, between February 20 and March 3, CEO Karp and co-founder Peter Thiel sold over $400 million worth of Palantir shares. Analysts recommend buying; the founders are selling.
On the same day the memo was leaked, there was another twist.
Dario told investors at the Morgan Stanley Tech Conference on March 4 that Anthropic is “trying to cool things down” with the Pentagon and “reach a mutually acceptable agreement.” He said that Anthropic and the Pentagon “share far more common ground than differences.” According to insiders, during the five days of being blacklisted, Anthropic executives privately expressed regret over their previous communication approach.
But the leak of the memo may have complicated matters again. Axios reported that White House officials believe Dario's attack on the Trump administration in the memo "could ruin the chance for reconciliation." One official again said: "You can't believe Claude is not secretly executing Dario's personal agenda in classified environments."
Interestingly, OpenAI is also involved. Altman, when signing the Pentagon contract, proactively asked the government to “provide the same terms to Anthropic” and publicly opposed listing Anthropic as a “supply chain risk.” He called it a “very bad decision.”
The Pentagon has 48 hours to decide on Anthropic, and it has been over a week since Palantir's engine was dismantled. Now both sides are back at the negotiating table.