Enterprise AI agents, the hidden security blind spots behind productivity… widening governance gaps
As enterprises accelerate their adoption of artificial intelligence (AI), the "autonomous agents" that boost work efficiency have also become new security vulnerabilities. Notably, many companies have deployed AI agents in their internal systems without establishing the trust and governance frameworks needed to manage them, and this gap is identified as a core risk.
KnowBe4 CEO Brian Palma stated at KB4-CON 2026: "The fundamental issue in the security field today is the gap between the speed of adoption and the speed of governance system development." He emphasized that companies should first "identify" and "understand" the AI agents operating within their systems, and that AI agents should be treated the way the security industry has historically treated human employees: as untrained assets whose behavior must be understood before they can be protected.
Palma likened AI agents to primary-school students: they cannot distinguish malicious instructions and are easily misled by erroneous commands or malicious code. "The core of building trust lies in transparency," he said. It is essential to know exactly which agents exist, what systems they connect to, and what resources they can access.
From “People” to “AI Agents”… Expanding Security Management Scope
In response to this shift, KnowBe4 is expanding its existing human risk management platform to cover AI agent security. The company's AIDA is a tool for automating and personalizing employee security awareness training, while the newly launched "Agent Risk Manager" focuses on inventorying the AI agents running in an enterprise environment, identifying their connection paths and access permissions, and setting corresponding policies and restrictions.
According to Palma, the tool first compiles a list of the AI agents currently operating within the enterprise. It then tracks which processes each agent uses and where it connects, such as email or financial systems. The final step is to set up "guardrails" that define what each agent can and cannot do.
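The three-step workflow described above (inventory the agents, map their connections, then enforce guardrails) can be sketched in a few lines. This is an illustrative toy model, not KnowBe4's actual product or API; all names here (`AgentRecord`, `GuardrailPolicy`, the agent and system labels) are hypothetical.

```python
from dataclasses import dataclass, field


@dataclass
class AgentRecord:
    """Step 1: an inventoried AI agent and the systems it touches."""
    name: str
    connects_to: set[str] = field(default_factory=set)


@dataclass
class GuardrailPolicy:
    """Step 3: the systems an agent is permitted to reach."""
    allowed: set[str]

    def violations(self, agent: AgentRecord) -> set[str]:
        # Any observed connection outside the allowed set is flagged.
        return agent.connects_to - self.allowed


# Step 1: inventory of agents found in the environment (hypothetical data).
inventory = [
    AgentRecord("invoice-bot", {"email", "finance"}),
    AgentRecord("helpdesk-bot", {"email", "crm", "finance"}),
]

# Steps 2-3: map each agent's connections against its policy.
policies = {
    "invoice-bot": GuardrailPolicy({"email", "finance"}),
    "helpdesk-bot": GuardrailPolicy({"email", "crm"}),
}

flagged = {
    agent.name: policies[agent.name].violations(agent)
    for agent in inventory
    if policies[agent.name].violations(agent)
}
print(flagged)  # helpdesk-bot reaches "finance" outside its policy
```

The point of the sketch is the shape of the data, not the enforcement mechanism: once agents and their connections are enumerated, policy checking reduces to a set difference per agent.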
This reflects how enterprise security strategies are being restructured around a "dual-threat" framework: the same AI agent can both enhance productivity and serve as a channel for attackers to exploit. Companies therefore need to design defensive and offensive strategies simultaneously.
AI Threats Are Increasingly Advanced… “Incidents Triggered by Agents Will Rise in the Next Year”
KnowBe4 states that its AI models are trained on behavioral data accumulated over 15 years from 70,000 organizations and more than 100 million users, which Palma pointed to as what sets the company apart from competitors. According to the company's "2025 Human Risk Status" report, 45% of cybersecurity leaders see "evolving AI-driven threats" as their biggest challenge.
He added that personal risk scores on the AIDA platform have fallen by about 4 percentage points compared with manual operations, indicating that AI-driven, customized security training has measurably improved actual user behavior.
Additionally, KnowBe4 is expanding its support beyond Microsoft Copilot to Gemini, Claude, and ChatGPT to address "multi-LLM" environments. Since enterprises will not rely on a single large language model, the company argues, AI agent risk management must be capable of handling multiple models simultaneously.
Palma warned that in the next year, cases of AI agents directly causing security incidents due to vulnerabilities could significantly increase. He said, “Agents expand the attack surface. Deployment itself is very important and effective, but it also brings great risks.”
The focus of enterprise AI security has now shifted beyond the simple question of "whether to adopt" to "who is using which AI agents, and where are they connected." The view that trust and control systems must be established before a productivity- and innovation-driven AI strategy can succeed is gaining increasing recognition.
TP AI Notes: This article was summarized by the TokenPost.ai language model. Details may be omitted or may differ from the facts.