Futures
Access hundreds of perpetual contracts
TradFi
Gold
One platform for global traditional assets
Options
Hot
Trade European-style vanilla options
Unified Account
Maximize your capital efficiency
Demo Trading
Futures Kickoff
Get prepared for your futures trading
Futures Events
Join events to earn rewards
Demo Trading
Use virtual funds to experience risk-free trading
Launch
CandyDrop
Collect candies to earn airdrops
Launchpool
Quick staking, earn potential new tokens
HODLer Airdrop
Hold GT and get massive airdrops for free
Launchpad
Be early to the next big token project
Alpha Points
Trade on-chain assets and earn airdrops
Futures Points
Earn futures points and claim airdrop rewards
AI agent core technology now has a critical flaw: LangChain 'LangGrinch' vulnerability warning
Source: TokenPost Original Title: Critical Flaw in Core AI Agent Technology… LangChain ‘LangGrinch’ Alert Issued Original Link: The core library used in AI agent applications, ‘LangChain core(langchain-core)’, has been found to contain a serious security vulnerability. This issue has been named ‘LangGrinch’, allowing attackers to steal sensitive information from AI systems. This vulnerability could undermine the security foundation of numerous AI applications over the long term, raising alarms across the industry.
AI security startup Cyata Security has publicly disclosed this vulnerability as CVE-2025-68664 and assigned a danger score of 9.3 in the unified vulnerability scoring system(CVSS). The core of the problem lies in internal auxiliary functions within the LangChain core, which may mistake user input as trusted objects during serialization and deserialization processes. Attackers can exploit ‘prompt injection(prompt injection)’ techniques to insert internal token keys into structured outputs generated by the agent, causing them to be processed as trusted objects later on.
LangChain core plays a critical role among many AI agent frameworks, with tens of millions of downloads in the past 30 days and a total download count exceeding 847 million. Considering the entire LangChain ecosystem and its related applications, the scope of this vulnerability’s impact will be extremely broad.
Cyata security researcher Yarden Forrat stated: “This vulnerability is not just a deserialization issue but occurs within the serialization path itself, which is unusual. The storage, transmission, and subsequent recovery of structured data generated by AI prompts expose new attack surfaces.” Cyata has confirmed 12 clear attack vectors, which can evolve from a single prompt into multiple scenarios.
When triggered, the attack can cause remote HTTP requests to leak entire environment variables, including cloud credentials, database access URLs, vector database information, and LLM API keys, among other sensitive data. Importantly, this vulnerability is a structural flaw that exists solely within the LangChain core itself, without involving third-party tools or external integrations. Cyata refers to it as “a threat existing within the ecosystem pipeline layer,” indicating high vigilance.
Security patches to address this issue have been released separately for LangChain core versions 1.2.5 and 0.3.81. Before publicly disclosing the issue, Cyata had notified the LangChain operations team in advance, which has taken immediate response measures and implemented long-term security reinforcement plans.
Cyata co-founder and CEO Shahar Tal said: “As AI systems begin large-scale deployment into industrial settings, the permissions and scope of authority ultimately granted to the system have become a core security concern, surpassing the code execution itself. In the agent ID architecture, permission reduction and minimizing impact scope have become necessary design elements.”
This incident will serve as an industry wake-up call, prompting a re-examination of the security design fundamentals within the AI industry, especially in an era where agent automation increasingly replaces manual intervention.