Google: "First Confirmed AI-Generated Zero-Day Exploit" Captured; Attack Tool Development Fully Underway


Google Threat Intelligence Group (GTIG) announced that it has captured the first confirmed case of an exploit program in which artificial intelligence (AI) was used to create a functional exploit for a "zero-day" vulnerability. Although there are no signs of large-scale damage yet, the fact that AI has been demonstrably used to develop attack tools has heightened vigilance across the security industry.

According to the report, a criminal hacking group developed a Python-based exploit targeting a two-factor authentication (2FA) bypass vulnerability in a widely used open-source, web-based system management tool. Analysis shows the group attempted to deploy it in large-scale attacks, but implementation errors prevented successful exploitation. Google has notified the vendor of the vulnerability, and a patch has since been released.

GTIG explained that the code shows multiple obvious signs of AI involvement: severity scores that do not match reality, overly "textbook" Python formatting, detailed help menus, and descriptive docstrings bearing strong traces of training data. However, Google explicitly stated that its own Gemini model was not used in this operation.

Targeting “semantic logical flaws” that are difficult for traditional security tools to detect

The vulnerability stemmed from a "semantic logic flaw" more complex than a simple coding error: the developers had embedded implicit trust assumptions about specific objects in the code, and this high-level design mistake became the attack surface. Such vulnerabilities are hard for traditional security scanners to detect because the code appears "functionally normal."
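As a hypothetical illustration of this class of flaw (the code below is invented for explanation and is not from the report): a 2FA check that implicitly trusts fields merged in from a client-supplied request is correct in every normal flow, so scanners see nothing wrong, yet the trust assumption itself is the vulnerability.

```python
from dataclasses import dataclass

@dataclass
class Session:
    user_id: str
    mfa_verified: bool  # meant to be set only by the server after a 2FA challenge

def handle_request(session: Session, body: dict) -> str:
    # SEMANTIC FLAW: client-supplied fields are merged into the trusted
    # session object *before* the authorization check. Every legitimate
    # flow still works, so the code looks "functionally normal" -- but a
    # request containing {"mfa_verified": true} skips the 2FA check.
    for key, value in body.items():
        if hasattr(session, key):
            setattr(session, key, value)  # trust boundary silently crossed
    if not session.mfa_verified:
        return "403: complete 2FA first"
    return f"200: welcome {session.user_id}"

# A normal request without 2FA is correctly rejected...
print(handle_request(Session("alice", False), {}))
# ...but a crafted body flips the trusted flag and bypasses the check.
print(handle_request(Session("alice", False), {"mfa_verified": True}))
```

The bug is not in any single line; it is the design-level assumption that `Session` fields are server-controlled, which is exactly the kind of intent-dependent flaw the report says LLMs are comparatively good at spotting.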

In contrast, GTIG explained that the latest large language models (LLMs) demonstrate advantages in inferring developer intent and identifying hidden flaws in seemingly sound logic. This indicates AI has advanced beyond simple automation tools into a new stage where it can understand the context of security reviews.

GTIG Chief Analyst John Hultquist stated, "The idea that an AI-based vulnerability race is about to begin is a misconception. The reality is that it has already started, and for every confirmed AI-related zero-day, there are likely many undiscovered cases." He assessed that threat actors are using AI to increase the speed, scale, and sophistication of their attacks.

China, North Korea, Russia utilizing AI throughout attack processes

GTIG believes this case is not an exception but part of a broader trend. The report shows that state-supported hacking groups linked to China, North Korea, and Russia are using AI at every stage of attacks, including reconnaissance, vulnerability analysis, malware development, and influence operations. Criminal organizations are also adopting similar methods to produce malware faster and operate on a larger scale.

North Korea-associated threat group "APT45" was observed sending thousands of repeated prompts to recursively analyze vulnerabilities and verify proof-of-concept (PoC) exploits, an approach interpreted as an attempt to build attack assets that would be unmanageable without AI assistance.

A group possibly linked to China, "UNC2814," reportedly employed "jailbreaking" (role-based prompt injection) to induce Gemini to research pre-auth remote code execution (RCE) vulnerabilities in TP-Link router firmware and the Odette File Transfer Protocol (OFTP).

Another China-affiliated actor was found using the frameworks "Hexstrike" and "Strix" together with a memory system called "Graphiti" to autonomously probe the systems of a Japanese technology company and an East Asian cybersecurity platform. The report notes the actor switches reconnaissance tools based on internal reasoning, minimizing human intervention.

Spread to Android backdoors, fake code, and voice cloning

The report also introduces “PROMPTSPY,” an Android backdoor malware that, during operation, calls the Gemini API to interpret user interface elements on smartphones and automatically generate touch coordinates. This indicates AI is being integrated into mobile attack automation.
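The control loop described for PROMPTSPY can be sketched as follows. The actual prompts and Gemini usage are not public, so the model call is stubbed with a trivial keyword match, and all function names and the UI format here are invented; the point is the architecture: screen state in, tap coordinates out.

```python
import json

def describe_screen() -> str:
    """Stand-in for an accessibility-style dump of on-screen widgets."""
    return json.dumps([
        {"text": "Decline", "x": 120, "y": 900},
        {"text": "Allow",   "x": 420, "y": 900},
    ])

def pick_tap(ui_json: str, goal: str) -> tuple[int, int]:
    # In PROMPTSPY this decision is reportedly delegated to the Gemini API;
    # here a simple keyword match stands in for the model so the loop runs
    # offline. Given the current UI state and a goal, return where to tap.
    for widget in json.loads(ui_json):
        if goal.lower() in widget["text"].lower():
            return widget["x"], widget["y"]
    raise LookupError(f"no widget matching {goal!r}")

x, y = pick_tap(describe_screen(), "allow")
print(f"tap at ({x}, {y})")  # tap at (420, 900)
```

Delegating the "which element do I tap" decision to a model is what makes this approach resilient to UI changes that would break a hard-coded automation script.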

Analysis shows that the Russia-linked malware families "CANFAIL" and "LONGSTREAM" hide their true malicious functions behind AI-generated decoy code. Investigators also found that Russian actors in the "Overload" influence operation used AI voice cloning to produce fake videos impersonating real journalists, distributing them in campaigns targeting Ukraine, France, and the US.

Meanwhile, the criminal group “TeamPCP” is accused of being behind the March intrusion into the popular AI gateway utility “LiteLLM.” Investigations revealed they embedded credential-stealing tools via malicious PyPI packages and pull requests, stealing AWS keys and GitHub tokens, and monetizing through ransomware partnerships.

Google: “Blocked malicious accounts, expanding AI defense tools”

As a countermeasure, Google stated it has blocked malicious accounts abusing Gemini and is expanding the use of AI-based defenses, such as the vulnerability detection proxy “Big Sleep” and patching tool “CodeMender.”

The core of this report is that AI is no longer just a laboratory-level auxiliary tool. From creating zero-day exploits and hiding malicious code to automating mobile backdoors and spreading disinformation, AI has penetrated every corner of attack scenes. Some analysts believe the security competition is shifting from “who can detect and respond to vulnerabilities faster” to “who can better control and utilize AI.”

Note: This article was summarized using a TokenPost.ai-based language model. The summary may omit key content from the original or contain factual inaccuracies.
