Fed’s Bowman flags rising AI risks to banks, calls for coordinated oversight
Federal Reserve Vice Chair for Supervision Michelle Bowman urged regulators to work more closely together as artificial intelligence tools rapidly make their way into the banking system, warning that the same technology helping firms defend themselves could also be turned against them.
Speaking at a Financial Stability Oversight Council roundtable on cybersecurity and artificial intelligence, Bowman said regulators are still figuring out “how best to oversee” these fast-moving technologies as banks begin integrating them into core operations. One example she pointed to was Mythos, an advanced system built by Anthropic that can scan software for vulnerabilities.
“Anthropic’s Mythos… shows the dynamic nature of this technology and how quickly its capabilities can develop.”
The concern, Bowman indicated, is straightforward but serious: tools that help banks find weaknesses in their systems could just as easily be used by attackers to exploit them.
Safer ways for banks to adopt AI
Behind the scenes, regulators are now wrestling with a practical question — whether existing rules are enough.
For years, banks have operated under model risk frameworks designed to keep quantitative systems in check. But AI, especially newer generative models, doesn’t always behave in predictable ways. That makes it harder to test, monitor, and explain — all things regulators typically expect.
Officials from the Federal Reserve, Office of the Comptroller of the Currency, and Federal Deposit Insurance Corporation are now working together on guidance meant to outline safer ways for banks to adopt AI, Bowman said.
The approach, at least for now, leans toward supervision rather than strict rulemaking — giving banks flexibility, but also leaving some uncertainty about where the lines will ultimately be drawn.
Crypto investors see ripple effects
The implications of AI aren’t limited to banks. Investors in digital assets are also watching closely, particularly as money flows shift between sectors.
Macro strategist Lyn Alden warned that enthusiasm around AI-related stocks could eventually hit a ceiling:
“It could be that the AI stocks eventually just peak, they get so silly big that they can’t get realistically much higher.”
If that happens, she suggests, capital could rotate elsewhere — potentially into assets like Bitcoin.
Meanwhile, investor Raoul Pal pointed to a broader theme driving both AI and crypto:
“They’re both really network effects.”
That dynamic — where value grows as adoption expands — is one reason both sectors have attracted intense investor interest.
U.S. takes a lighter regulatory touch
Compared with Europe, U.S. regulators are still taking a relatively flexible approach.
The European Union’s AI Act sets out strict requirements for high-risk AI systems, including those used in finance. The U.S., by contrast, is moving more cautiously, relying on broad principles rather than detailed rules — at least for now.
That gap could matter for global banks operating across jurisdictions, where compliance expectations may begin to diverge.
Tensions inside Washington
Complicating matters further is a growing policy divide within the U.S. government over Anthropic itself.
The U.S. Department of Defense has labeled the company a supply-chain risk after it refused to loosen safeguards on how its AI can be used, according to Reuters.
At the same time, the White House is exploring ways to keep access to cutting-edge AI open, potentially allowing agencies to work around that designation. The split highlights a broader tension: how to balance national security concerns with the push to stay competitive in AI.
Senior officials, including Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell, have already met with major banks to discuss the risks — a sign that the issue is being taken seriously at the highest levels.
Timeline: How the issue unfolded
Early 2026 — Anthropic develops advanced AI systems including Mythos
April 2026 — Pentagon designates Anthropic a supply-chain risk
April 2026 — Treasury and Federal Reserve meet banks to assess AI risks
Late April 2026 — White House drafts guidance that could bypass the designation
May 1, 2026 — Bowman calls for coordinated oversight
Coordination, not fragmentation, is the key
For banks, the immediate challenge is practical: how to use AI tools without exposing themselves to new kinds of risk.
For regulators, the challenge is broader — building a framework that keeps pace with a technology evolving faster than the rules designed to govern it.
Bowman’s message was clear: coordination, not fragmentation, will be key as AI becomes more deeply embedded in the financial system.