AI security and governance: quantum computing forces a "re-evaluation"

Author: Zhang Feng

Artificial intelligence has been integrated into social production and daily life to an unprecedented degree, and its security and governance system forms a cornerstone of the digital era. However, a computing revolution rooted in physical principles—quantum computing—is quietly approaching, and its potentially disruptive power subjects existing security defenses and governance frameworks to severe scrutiny. Will quantum computing upend today’s AI security and governance systems? This is not only a technical question, but also a whole-of-society challenge concerning the order of the future digital society. When a leap in computing power meets a lag in rules, how do we prepare for “Q-Day”?


I. How does quantum computing threaten the currently widely used asymmetric encryption algorithms?

The security of today’s AI systems—from model transmission and data storage to identity authentication—relies heavily on asymmetric encryption algorithms represented by RSA and ECC (elliptic curve cryptography). The security of these algorithms is built on the “computational complexity” of mathematical problems such as “integer factorization” or “discrete logarithms,” which classical computers cannot solve within a reasonable time.

However, quantum computing brings a fundamental paradigm shift. Quantum algorithms represented by Shor’s algorithm can, in theory, reduce the time needed to solve these hard problems from exponential to polynomial. A review paper notes that the latest quantum algorithms, including the Regev algorithm and its extensions, keep improving the efficiency of breaking asymmetric cryptography. This means that once a sufficiently large quantum computer arrives (typically meaning a general-purpose machine with millions of stable qubits), the “locks” protecting today’s internet communications, digital signatures, and encrypted data could be opened almost instantly.
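To make the stakes concrete, here is a minimal, purely illustrative sketch (toy-sized numbers, not real cryptography) of why RSA's security collapses once the modulus can be factored; Shor's algorithm is what would make that factoring step feasible at real key sizes.

```python
# Toy illustration (NOT real cryptography): RSA's security rests on the
# difficulty of factoring n = p * q. With toy-sized primes the factoring
# step is trivial; Shor's algorithm is what would make it feasible for
# real 2048-bit moduli.
from math import gcd

# Key generation with deliberately tiny primes
p, q = 61, 53
n = p * q                      # public modulus
phi = (p - 1) * (q - 1)        # Euler's totient, kept secret
e = 17                         # public exponent, coprime to phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)            # private exponent (Python 3.8+)

# Encrypt a small message with the public key (e, n)
m = 42
c = pow(m, e, n)

# An attacker who can factor n recovers p and q (by brute force here,
# by Shor's algorithm at realistic sizes) ...
recovered_p = next(i for i in range(2, n) if n % i == 0)
recovered_q = n // recovered_p

# ... and from them rebuilds the private key and decrypts the ciphertext.
recovered_d = pow(e, -1, (recovered_p - 1) * (recovered_q - 1))
assert pow(c, recovered_d, n) == m
print("factored n and recovered plaintext:", pow(c, recovered_d, n))
```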

This threat is not far off. Research by the ZhiYuan community warns that this is a “now-in-progress” threat: attackers can intercept and store encrypted communication data today (including AI training data, model parameters, etc.), and then wait for future quantum computers to mature before decrypting it. This “intercept first, decrypt later” strategy exposes all high-value information that needs long-term confidentiality—including state secrets, commercial patents, and personal privacy data—to future risk. Therefore, the threat of quantum computing to asymmetric encryption is fundamental and systemic; it directly undermines the very foundation of today’s AI security and even the security system of the entire digital world.

II. In the face of quantum computing, what new challenges do AI model training and data privacy protection face?

The development of AI depends on feeding massive amounts of data and training complex models, a process that itself is full of privacy and security challenges. The involvement of quantum computing makes these challenges sharper and more complex.

First, the failure of long-term confidentiality in the data lifecycle. As mentioned earlier, today’s AI training datasets encrypted for storage or transmission in the cloud may be completely exposed due to future quantum decryption. A white paper on the global post-quantum migration strategy by Xi’an Jiaotong-Liverpool University clearly states that adversaries worldwide are actively implementing this “data harvesting” strategy, patiently waiting for the arrival of “Q-Day” (the day when quantum computers become practical). This poses a source-level threat to AI models trained on sensitive data (such as medical records, financial information, and biometric characteristics).

Second, privacy-computing technologies such as federated learning face new tests. Federated learning protects raw data by training models locally and only exchanging model-parameter updates. However, the gradients or parameter updates exchanged in these interactions are themselves transmitted under encryption. If the underlying encryption is broken by quantum computing, attackers can work backwards from the intercepted updates to infer characteristics of the participants’ original data, rendering the privacy protection mechanism effectively meaningless.
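As a rough illustration of what is actually exchanged, the following sketch simulates one federated-averaging round with two simulated clients; all names and data are invented. The point is that the transmitted parameter updates are themselves sensitive if the channel protecting them is ever broken.

```python
# Illustrative sketch of one federated-averaging round (all names and data
# are invented). Clients only transmit model updates, normally over an
# encrypted channel; if that channel's encryption is later broken, the
# intercepted updates can be analyzed to infer properties of local data.
import numpy as np

rng = np.random.default_rng(0)
global_weights = np.zeros(3)      # shared model: a simple linear regressor

def local_update(weights, X, y, lr=0.1, steps=20):
    """Gradient-descent update computed on one client's private data."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Two clients, each holding a private local dataset
clients = []
for _ in range(2):
    X = rng.normal(size=(50, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

# Each client sends back only its updated weights, never its raw data
updates = [local_update(global_weights, X, y) for X, y in clients]

# Server aggregates the updates; the updates themselves are what an
# "intercept first, decrypt later" attacker would eventually read.
global_weights = np.mean(updates, axis=0)
print("aggregated global weights:", global_weights)
```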

Finally, the difficulty of model theft and intellectual property protection increases dramatically. A trained AI model is a core asset for an enterprise. At present, model weights and architectures are typically distributed and deployed in encrypted form. Quantum computing may render these protections ineffective, allowing models to be easily copied, reverse-engineered, or tampered with, leading to serious intellectual property infringement and security vulnerabilities. The China Academy of Information and Communications Technology, in its “Blue Book of AI Governance,” emphasizes that AI governance must address risks such as technology misuse and data security; quantum computing undoubtedly amplifies the destructive power of these risks.

III. How will the development of quantum machine learning affect the AI security and ethics review framework?

The combination of quantum computing and AI—quantum machine learning (QML)—signals a new round of breakthroughs in performance. But at the same time, it also brings unprecedented new safety and ethical issues that challenge existing review frameworks.

On the safety front, QML could yield more powerful attack tools. For example, quantum algorithms may greatly accelerate the generation of adversarial samples, producing stealthier and more destructive attacks that quickly render today’s classical-computing-based AI defenses (such as adversarial training and anomaly detection) obsolete. Some analyses call “quantum + AI” the next life-or-death battlefield in cybersecurity and argue that the relevant regulatory frameworks must be improved in a forward-looking manner.
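For readers unfamiliar with adversarial samples, the sketch below shows a classical FGSM-style perturbation against a toy logistic model (all weights and data are invented). The concern raised above is that quantum algorithms might search for such perturbations far more efficiently; this classical sketch only shows the kind of attack that would be accelerated.

```python
# Classical FGSM-style adversarial perturbation against a toy logistic model
# (all weights and data are invented). The text's concern is that quantum
# algorithms might find such perturbations far faster; this sketch only shows
# the classical attack that would be accelerated.
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=10)     # "trained" model weights (illustrative)
b = 0.0

def predict(x):
    """Sigmoid probability that x belongs to class 1."""
    return 1 / (1 + np.exp(-(w @ x + b)))

x = rng.normal(size=10)     # a legitimate input
y_true = 1.0                # its correct label

# Gradient of the cross-entropy loss with respect to the *input*
grad_x = (predict(x) - y_true) * w

# FGSM step: move the input in the direction that increases the loss
epsilon = 0.3
x_adv = x + epsilon * np.sign(grad_x)

print("clean prediction:      ", round(float(predict(x)), 3))
print("adversarial prediction:", round(float(predict(x_adv)), 3))
```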

On the ethical front, QML’s “black box” characteristics may be even more profound than those of classical AI. Its decision-making process is based on quantum superposition and entangled states, which may be harder to explain, audit, and hold accountable. There has already been extensive discussion of the ethical debates and risks brought by QML, such as algorithmic fairness, responsibility attribution, and technical controllability. How will existing AI ethical guidelines (such as transparency, fairness, and accountability) be implemented at the quantum scale? How will regulators review a decision model built on quantum circuits that may be in a superposition of multiple states? These are all problems that existing ethical review frameworks are not yet prepared to handle. The governance model needs to shift from merely ensuring technical compliance to a deeper understanding of the essence of quantum characteristics and their social impacts.

IV. Can existing AI governance regulations (such as the GDPR) respond to the security changes brought by quantum computing?

Current AI and data governance regulations, represented by the EU’s General Data Protection Regulation (GDPR), rest on core principles such as “privacy by design and by default,” “data minimization,” “storage limitation,” and “integrity and confidentiality,” which still offer guidance at the conceptual level. At the level of specific technical implementations and compliance requirements, however, they face a “compliance gap” created by quantum computing.

The GDPR requires data controllers to take appropriate technical and organizational measures to ensure data security. But in the context of quantum threats, what constitutes “appropriate” encryption measures? Continuing to use algorithms proven to be insecure against quantum attacks is very likely to be regarded in the future as failing to fulfill security safeguard obligations. When faced with advanced attacks launched using quantum computing that may be completed instantly and leave no trace, how can the GDPR’s time limits for data breach notification be effectively enforced?

Lawmakers worldwide have already recognized the need for change. The “2025 Global AI Governance Report” shows that countries are accelerating the formulation of specialized AI governance laws and establishing high-level coordination bodies. China’s “Digital China Development Report (2024)” emphasizes the need to “accelerate the improvement of data foundational systems” and to continuously advance the “AI+” initiative. These developments indicate that governance systems are proactively adjusting. However, regulations specifically targeting the intersection of “quantum computing + AI” remain almost entirely absent. Existing regulations say nothing about specific issues such as post-quantum cryptography migration timelines, QML model audit standards, or quantum-era data security classification, making it difficult to respond effectively to the security transformations that are coming.

V. What are the application prospects and implementation challenges of post-quantum cryptography in AI systems?

The most direct technical approach to counter quantum threats is post-quantum cryptography (PQC). PQC refers to cryptographic algorithms that can resist attacks from quantum computers. It is not based on quantum principles; rather, it is based on new mathematical problems believed to be hard even for quantum computers (such as lattice-based, code-based, and multivariate problems).

The application prospects in AI systems are broad and urgent. PQC can be used to protect every link in an AI workflow: encrypt training data and model files with PQC algorithms; use PQC digital signatures to verify the integrity and authenticity of model sources; and establish PQC-secured communication channels between distributed AI computation nodes. Fortinet points out that PQC is not a distant concept, but a practical solution that is urgently needed to protect digital systems from potential quantum threats.
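As one concrete possibility, the following sketch signs and verifies a model artifact’s hash with a lattice-based signature scheme, assuming the open-source liboqs-python bindings (imported as `oqs`) are available; the file name and algorithm choice are illustrative assumptions and depend on the installed library version.

```python
# Sketch: a post-quantum signature over a model artifact's hash, assuming the
# liboqs-python bindings ("oqs") are installed. The file name and algorithm
# name ("Dilithium3"; newer releases may expose "ML-DSA-65" instead) are
# illustrative assumptions, not a fixed recipe.
import hashlib

import oqs

MODEL_PATH = "model.safetensors"   # hypothetical model artifact
SIG_ALG = "Dilithium3"             # check oqs.get_enabled_sig_mechanisms() locally

def file_digest(path: str) -> bytes:
    """SHA-256 digest of the file to be signed."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).digest()

# Publisher side: sign the model digest with a PQC signature scheme
with oqs.Signature(SIG_ALG) as signer:
    public_key = signer.generate_keypair()
    signature = signer.sign(file_digest(MODEL_PATH))

# Consumer side: verify integrity and origin before loading the model
with oqs.Signature(SIG_ALG) as verifier:
    ok = verifier.verify(file_digest(MODEL_PATH), signature, public_key)
    print("model signature valid:", ok)
```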

However, comprehensive deployment of PQC faces significant challenges:

Performance and compatibility challenges: Many PQC algorithms have key sizes, signature lengths, or computational overheads far greater than those of existing algorithms (the sketch after this list of challenges makes the size gap concrete). Integrating them into AI training and inference workflows that are sensitive to computational efficiency and latency may create performance bottlenecks. At the same time, all relevant hardware, software, and protocol stacks must be upgraded to ensure compatibility.

The complexity of standards and migration: Although institutions such as the U.S. NIST are advancing PQC standardization, it will take time for final standards to settle and to be adopted uniformly worldwide. A recent industry update from the Beijing Municipal Managed Service Center for Commercial Secrets shows that the industry is actively open-sourcing implementations of NIST candidate algorithms to help different sectors respond to the threat. The migration itself is a large and complex systems-engineering effort, involving risk assessment, algorithm selection, hybrid deployment, testing, and full replacement, especially for a structurally complex AI ecosystem.

New security risks: PQC algorithms are a relatively new research area, and their long-term security has not yet withstood decades of real-world cryptanalysis the way RSA’s has. Rushing to deploy PQC algorithms that may harbor unknown vulnerabilities into AI systems is itself a risk.
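To make the performance and compatibility challenge above tangible, the sketch below measures a classical ECDSA signature against a lattice-based PQC signature. It assumes the `cryptography` package and liboqs-python (`oqs`) are installed; the exact figures depend on the algorithms and library versions available.

```python
# Sketch: measure how much larger a post-quantum signature is than a classical
# one. Assumes the "cryptography" package and liboqs-python ("oqs") are
# installed; the exact figures depend on the algorithms and library versions.
import oqs
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

message = b"model checkpoint digest"

# Classical baseline: ECDSA over the P-256 curve
ec_key = ec.generate_private_key(ec.SECP256R1())
ec_sig = ec_key.sign(message, ec.ECDSA(hashes.SHA256()))
print(f"ECDSA P-256 signature: {len(ec_sig)} bytes")

# Post-quantum: a lattice-based signature scheme (name depends on liboqs version)
with oqs.Signature("Dilithium3") as pq:
    pq_pub = pq.generate_keypair()
    pq_sig = pq.sign(message)
    print(f"Dilithium3 public key: {len(pq_pub)} bytes, signature: {len(pq_sig)} bytes")
```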

VI. In the face of this transformation, waiting passively for “Q-Day” is dangerous

The disruptive impact of quantum computing on today’s AI security and governance systems is real and imminent. It does not completely overturn the existing systems; instead, by dismantling their cryptographic foundations, amplifying data risks, complicating ethical issues, and highlighting regulatory lag, it forces the entire system to undergo a deep, forward-looking upgrade.

In the face of this transformation, it is dangerous to wait passively for “Q-Day.” We recommend the following actionable paths:

Launch quantum security risk assessment and create an inventory: Immediately conduct quantum threat assessments of core AI assets (especially models and data involving long-term sensitive information), identify the most vulnerable components, and build a migration priority inventory.

Formulate and implement a PQC migration roadmap: Track progress from standardization bodies such as NIST, and begin planning PQC integration in the development and operation of AI systems. Prioritize “encryption agility” in the design of new and critical systems so that cryptographic algorithms can be replaced seamlessly in the future. Consider a hybrid “classical + PQC” encryption mode as a transition (a minimal sketch follows these recommendations).

Promote adaptive updates to the governance framework: Industry organizations, standards bodies, and regulators should collaborate to research and incorporate quantum-resistance requirements into AI security standards, data protection regulations, and product certification systems. Establish research frameworks and guidelines in advance for the ethical review of QML.

Strengthen cross-disciplinary talent development and research: Cultivate professionals who understand AI as well as quantum computing and cryptography; encourage the inclusion of quantum threat models in AI security research; and fund R&D of quantum-resistant AI security technologies.
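The hybrid “classical + PQC” transition mode mentioned in the second recommendation might look roughly like the sketch below, which derives a session key from both an X25519 exchange and a post-quantum KEM. It assumes the `cryptography` package and liboqs-python (`oqs`); the SHA-256 combiner is a simplification of a proper key-derivation function such as HKDF.

```python
# Sketch of a hybrid "classical + PQC" key establishment: the session key
# depends on BOTH an X25519 exchange and a post-quantum KEM, so breaking
# either primitive alone is not enough. Assumes the "cryptography" package
# and liboqs-python ("oqs"); the SHA-256 combiner below is a simplification
# of a proper key-derivation function such as HKDF.
import hashlib

import oqs
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

KEM_ALG = "Kyber768"   # algorithm name depends on the liboqs version (e.g. "ML-KEM-768")

# --- Classical part: X25519 Diffie-Hellman key exchange ---
client_ecdh = X25519PrivateKey.generate()
server_ecdh = X25519PrivateKey.generate()
classical_secret = client_ecdh.exchange(server_ecdh.public_key())

# --- Post-quantum part: KEM encapsulation ---
with oqs.KeyEncapsulation(KEM_ALG) as client_kem:
    client_kem_pub = client_kem.generate_keypair()
    with oqs.KeyEncapsulation(KEM_ALG) as server_kem:
        ciphertext, pq_secret_server = server_kem.encap_secret(client_kem_pub)
    pq_secret_client = client_kem.decap_secret(ciphertext)

assert pq_secret_client == pq_secret_server

# --- Combine both secrets into a single session key ---
session_key = hashlib.sha256(classical_secret + pq_secret_client).digest()
print("hybrid session key:", session_key.hex())
```

Keeping the algorithm names as configuration rather than hard-coded constants is the kind of “encryption agility” the roadmap above calls for.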

The challenges brought by quantum computing are huge, but they also provide an opportunity for us to re-examine and reinforce the foundations of the digital world. Through proactive planning, coordinated innovation, and agile governance, we are entirely capable of building a more resilient AI future—one that can embrace quantum computing performance dividends while also resisting its security risks.
