Side-channel attacks exposing AI safety blind spots... Has rule-based detection reached its limit?
Discussions of artificial intelligence (AI) security mostly focus on model malfunctions or misuse. Some researchers argue, however, that a more urgent issue lies in what existing detection systems fail to see at all. The recently prominent "side-channel attacks" are cited as a typical example that lays this detection gap bare.
Side-channel attacks are not methods that break into software code itself. Instead, they are techniques that analyze physical signals such as power consumption, electromagnetic radiation, and processing time to steal information or interfere with program execution. Even sensitive data like encryption keys can be obtained by measuring signals that are inadvertently leaked by hardware.
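A classic software-level illustration of a timing side channel is secret comparison. The sketch below (function names are mine, for illustration only) contrasts a naive byte comparison, whose running time leaks how long the matching prefix is, with a constant-time variant that always scans every byte:

```python
def naive_compare(a: bytes, b: bytes) -> bool:
    # Returns at the first mismatching byte, so the running time grows
    # with the length of the matching prefix. An attacker who measures
    # response times can recover a secret one byte at a time.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_compare(a: bytes, b: bytes) -> bool:
    # Accumulates differences with XOR and always scans every byte,
    # so the running time does not depend on where the bytes differ.
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y
    return diff == 0

secret = b"s3cr3t-token-value"
print(naive_compare(secret, b"s3cr3t-token-value"))         # True
print(constant_time_compare(secret, b"wrong-guess!!!!!!!"))  # False
```

In practice the Python standard library already provides a vetted constant-time comparison, `hmac.compare_digest(a, b)`; the point here is only to show where the timing signal comes from.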
Recent studies show that external observers can infer the “topic” of AI interactions simply by analyzing patterns in encrypted traffic. No decryption or inspection of data content is needed. This means meaningful information can be revealed solely through the structure, timing, and sequence of traffic. The problem is that these signals lie outside the scope of current content-centric security tools.
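To make the idea concrete, here is a minimal sketch of how traffic metadata alone can separate interaction types. All session labels and packet sizes are invented for illustration; the only "features" used are things an eavesdropper on an encrypted link can actually observe, namely packet sizes and counts:

```python
from statistics import mean

# Toy "encrypted sessions": only packet sizes are visible, never plaintext.
known_sessions = {
    "code-generation": [[1400, 1380, 1420, 1390, 1410]],  # long, steady responses
    "short-qa":        [[220, 180, 260, 200]],            # short, bursty replies
}

def features(sizes):
    # Observable metadata only: mean packet size and packet count.
    return (mean(sizes), len(sizes))

def infer_topic(sizes):
    f = features(sizes)
    # Nearest-centroid match over the known traffic profiles.
    def dist(label):
        cs = [features(s) for s in known_sessions[label]]
        cx = mean(c[0] for c in cs)
        cy = mean(c[1] for c in cs)
        return (f[0] - cx) ** 2 + (f[1] - cy) ** 2
    return min(known_sessions, key=dist)

# An unseen session: the size pattern alone suggests its topic class.
print(infer_topic([1395, 1405, 1388, 1412, 1401]))  # code-generation
```

Real traffic-analysis attacks use far richer features (timing, direction, burst structure) and trained classifiers, but the principle is the same: no decryption is ever required.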
Limitations of rule-based detection
For the past 20 years, security detection has been built on "rules." Signatures, thresholds, known patterns, and anomaly-detection baselines have been the core of security operations. The industry has introduced ever more numerous and more sophisticated rules, and has also incorporated AI to make them run faster.
However, rule-based detection ultimately requires “objects to compare against” to operate. Alerts are triggered only when there are known traces, obvious deviations, or clear boundary violations. In contrast, side-channel attacks and many of the latest intrusion techniques often bypass these premises.
If an attacker uses encrypted channels, normal tools, or AI-assisted workflows, each individual action may appear normal. Looking at each step alone might show no anomalies, but over time, connecting these steps reveals attack patterns. This is precisely the “detection gap.” It is not a matter of insufficient coverage but rather a structural limitation.
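The gap between per-event rules and behavior over time can be sketched in a few lines. The thresholds and scenario below are hypothetical, chosen only to show the mechanism: a classic per-minute rule never fires on a low-and-slow attacker, while correlating the same events across a longer horizon does:

```python
FAILED_LOGIN_THRESHOLD = 5   # classic per-minute signature rule

def rule_based_alert(failures_in_last_minute: int) -> bool:
    # Fires only when a known threshold is crossed in a single window.
    return failures_in_last_minute >= FAILED_LOGIN_THRESHOLD

def behavioral_alert(event_times, window_hours=24, limit=20) -> bool:
    # Correlates individually "normal" events over a long horizon:
    # 2 failures per half hour never trips the rule above, but dozens
    # of failures spread across a day form one coherent attack pattern.
    recent = [t for t in event_times
              if event_times[-1] - t <= window_hours * 3600]
    return len(recent) >= limit

# A low-and-slow attacker: 2 failed logins every 30 minutes for 12 hours.
times = [i * 1800 for i in range(24) for _ in range(2)]
print(rule_based_alert(2))      # False - each minute looks normal
print(behavioral_alert(times))  # True  - the sequence reveals the attack
```

Production systems express this correlation with stateful detection logic and learned baselines rather than hard-coded loops, but the structural point is the same: the signal lives in the sequence, not in any single event.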
Attacks that even AI can miss
The practical significance of this detection gap is straightforward. Even if attackers operate internally, security teams may receive no signals at all. Not only are there no low-confidence alerts, but there may be no clues to investigate in the first place.
Side-channel attacks are a typical example. The data indeed exists but is hidden within timing differences, sequences, and interaction patterns. Existing tools are not designed to interpret these signals. Slow, stealthy intrusions—so-called “low-and-slow” attacks—or abuse of normal management tools, as well as AI-assisted attacks that change form based on movement paths, are also examples.
The problem is that as enterprises increasingly use AI in both business and attack contexts, such blind spots will expand. Yet a significant portion of security investment still goes toward handling the threats that are already being caught faster and more effectively. Automating rule generation, triaging alerts, and improving analysis efficiency are all meaningful, but they are of limited use against attacks that never trigger an alert in the first place.
Looking beyond events to “behavior”
Some analyses suggest that to narrow this gap, a method is needed that can interpret the “continuity of behavior” rather than individual events. The signals security teams need already exist. Relationships between systems, sequences of actions, changes in access patterns, and the evolution of behaviors over time can all reveal attack intentions.
For example, when an attacker tries to spread internally via encrypted channels, the traces are not in the traffic content but in changes in access methods. Although side-channel attacks do not directly show data, they reveal the structure. Ultimately, the key is not isolated events but processes and context.
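A minimal sketch of this idea is baselining the access graph itself: who normally talks to whom. Host names and the baseline below are invented for illustration; the detector never inspects content, only a change in access pattern:

```python
# Baseline of normal source -> destination access pairs. A new edge in
# this graph is a behavioral signal even when every packet is encrypted
# and every tool used is legitimate.
baseline = {
    "web-01":   {"db-01", "cache-01"},
    "build-01": {"repo-01"},
}

def flag_new_access(source: str, destination: str) -> bool:
    # Content is invisible; the change in access pattern is not.
    known = baseline.get(source, set())
    return destination not in known

print(flag_new_access("web-01", "db-01"))     # False - usual path
print(flag_new_access("web-01", "build-01"))  # True  - lateral-movement candidate
```

Real deployments would learn this baseline from operational data and weigh the novelty of an edge against context (time of day, credentials used, prior behavior) rather than flagging every new pair outright.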
Therefore, the view that relying solely on predefined rules or hand-written conditions cannot support next-generation detection is gaining ground. What is needed are models that learn from structured operational data and can discover patterns no one predefined. Ironically, some assessments note that the same deep learning methods usable for mounting side-channel attacks can also be employed to detect these subtle traffic patterns.
Security investment standards must also change
From the perspective of security leaders, the key issue is clear: it is necessary to distinguish whether an AI system makes rule-based detection more efficient or can detect behaviors that rules cannot express at all. Both approaches have value, but they address different problems.
For most organizations, the first step is not to add new tools but to calmly assess how far current detection strategies can actually see. Small actions during reconnaissance, covert internal spreading, and activities mixed into normal operations are considered particularly prone to major blind spots.
Reducing detection gaps can not only speed up response times but also help organizations realize earlier that “something is wrong.” This can shorten attacker dwell time, limit the scope of incidents, and increase the chances of taking defensive measures before attackers achieve their goals. It also helps companies better understand their actual risk exposure.
Side-channel attacks are not only a new technique but also reveal important information beyond the boundaries that traditional security systems fail to examine. Ultimately, AI is not the root cause of this problem; it simply makes these previously hidden detection limitations more transparent.