Vercel Open-Sources AI Security Framework DeepSec: Fully Local Operation Prevents Data Leakage, Supports Thousands of Concurrent Sandboxes

According to Beating Monitoring, Vercel has open-sourced deepsec, an AI-agent-driven security testing framework that addresses the security risks of cloud-based AI code scanning. The tool lets developers use their existing Claude or Codex API keys within their own local infrastructure to find vulnerabilities in large codebases, without granting external cloud services access to their source code.

deepsec internally calls on Opus 4.7 and GPT 5.5 and is built around a multi-round cross-validation workflow: after an initial regex screening pass, an agent traces data flow and generates a report; a second set of agents then re-validates the findings to eliminate false positives, keeping the final false-positive rate between 10% and 20%; finally, the system uses Git metadata to identify the contributor responsible for each vulnerability and automatically files a fix ticket.
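The staged workflow described above can be sketched roughly as follows. This is an illustrative Python outline, not deepsec's actual code: the pattern list, the `ask_agent` callback (standing in for a Claude/Codex call), and all function names are assumptions.

```python
import re
import subprocess

# Hypothetical regex prefilter patterns; the real tool's rules are unknown.
SUSPECT_PATTERNS = [
    re.compile(r"\beval\("),                  # dynamic code execution
    re.compile(r"subprocess\..*shell=True"),  # shell-injection risk
]

def regex_prescreen(path: str, source: str) -> list[dict]:
    """Stage 1: cheap regex screening flags candidate lines."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pat in SUSPECT_PATTERNS:
            if pat.search(line):
                findings.append(
                    {"path": path, "line": lineno, "snippet": line.strip()}
                )
    return findings

def agent_validate(finding: dict, ask_agent) -> bool:
    """Stages 2-3: one agent traces data flow and reports; a second agent
    independently re-validates the verdict to filter false positives.
    `ask_agent` is a placeholder for a model-API call."""
    report = ask_agent(f"Trace the data flow for this finding: {finding}")
    verdict = ask_agent(f"Independently re-validate this report: {report}")
    return "vulnerable" in verdict.lower()

def blame_contributor(path: str, line: int) -> str:
    """Stage 4: use Git metadata (git blame) to find the responsible
    contributor, e.g. to address a fix ticket."""
    out = subprocess.run(
        ["git", "blame", "-L", f"{line},{line}", "--porcelain", path],
        capture_output=True, text=True,
    ).stdout
    for field in out.splitlines():
        if field.startswith("author "):
            return field.removeprefix("author ")
    return "unknown"
```

The double validation in `agent_validate` mirrors the article's claim that a second round of agents reviews the first round's reports before anything reaches the final output.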

For large repositories that would take days to scan on a single machine, deepsec can distribute scanning tasks across Vercel Sandboxes; Vercel says that in tests on its own codebase, typical concurrency reached thousands of sandboxes. For complex proprietary business logic, the system also offers a plugin mechanism that lets the agent write regex matchers tailored to project-specific authentication logic or data layers.
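The fan-out idea can be sketched as sharding the repository's file list and scanning the shards in parallel. This is a local simulation under stated assumptions: `scan_chunk` is a stand-in for dispatching a chunk to a remote Vercel Sandbox, and the sharding scheme is invented for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

def shard(files: list[str], n_shards: int) -> list[list[str]]:
    """Round-robin the file list into at most n_shards chunks."""
    shards: list[list[str]] = [[] for _ in range(n_shards)]
    for i, f in enumerate(files):
        shards[i % n_shards].append(f)
    return [s for s in shards if s]  # drop empty shards

def scan_chunk(chunk: list[str]) -> list[str]:
    # Placeholder scanner: in the real system each chunk would run
    # inside its own sandbox. Here we pretend any file whose path
    # mentions "auth" yields a finding.
    return [f for f in chunk if "auth" in f]

def distributed_scan(files: list[str], concurrency: int = 1000) -> list[str]:
    """Fan chunks out to parallel workers and merge the findings."""
    chunks = shard(files, concurrency)
    findings: list[str] = []
    # Threads stand in for sandboxes; cap local workers at a sane number.
    with ThreadPoolExecutor(max_workers=min(concurrency, 64)) as pool:
        for result in pool.map(scan_chunk, chunks):
            findings.extend(result)
    return findings
```

With thousands of sandboxes, each chunk becomes small enough that a days-long single-machine scan collapses to roughly the time of scanning one chunk plus orchestration overhead.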
