Infostealer in a fake OpenAI repository, Mistral AI blackmail, and other cybersecurity events - ForkLog: cryptocurrencies, AI, singularity, the future

# Fake OpenAI repository spreads an infostealer, Mistral AI leak, and other cybersecurity events

We have gathered the most important cybersecurity news of the week.

  • ZachXBT revealed the identity of the organizer of $19 million phishing attacks.
  • Three suspects were charged in a series of “wrench attacks” in California.
  • A fake OpenAI repository distributed an infostealer.
  • “AI garbage” flooded hacker and cybercriminal platforms.

ZachXBT revealed the identity of the organizer of $19 million phishing attacks

On-chain researcher ZachXBT uncovered details of a phishing-based crypto theft totaling over $19 million.

1/ Meet Dritan Kapllani Jr, a US-based threat actor linked to $19M from social engineering thefts targeting crypto holders.

Dritan flaunts luxury cars, watches, private jets, & clubs all over social media.

Recently he was recorded on a call showing off a wallet with stolen funds. pic.twitter.com/iDKyUjUm4M

— ZachXBT (@zachxbt) May 12, 2026

The main suspect turned out to be American hacker Dritan Kapllani Jr. The de-anonymization started with his own carelessness.

On April 23, 2026, during a Discord video call, Kapllani argued with another user over who had more money (a "band for band" challenge). As proof, he showed his Exodus crypto wallet with a balance of $3.68 million.

ZachXBT analyzed the Ethereum transaction chain of the address. It was found that the funds were linked to the theft of 185 BTC on March 14, 2026. The investigation showed that on March 15, Kapllani’s wallet received his share — $5.3 million. By the time of the April video call, the hacker had already spent or laundered about $1.6 million.

During the investigation, the detective also found a connection between Kapllani and earlier incidents. This was aided by cybercriminal John Dagita, previously arrested for stealing over $40 million from the US government. In revenge for past conflicts, he posted one of Kapllani’s old addresses on Telegram.

ZachXBT confirmed that the address belonged to Kapllani: its fund-withdrawal pattern exactly matched the one used in the 185 BTC theft. He also found that in fall 2025, over $5.85 million stolen in five phishing attacks had passed through this wallet.

The expert assisted one of the victims in the investigation but deliberately did not publish his findings until official authorities acted.

On May 11, 2026, court documents related to the 185 BTC theft were declassified.

Charges have already been filed:

  • Trenton Johnson — for direct involvement in the theft. He faces up to 40 years in prison;
  • Crypto influencer under the nickname yelotree — for helping launder funds through a car rental business in Miami (up to 30 years in prison).

Kapllani leads a public and luxurious lifestyle, showcasing private jets and expensive cars on social media. He has long avoided arrests — the detective links this “invulnerability” to the common practice of delaying prosecution of minors. Since Kapllani recently turned 18, ZachXBT suspects that charges will be brought against him soon.

Three suspects charged in a series of “wrench attacks” in California

US prosecutors charged Elijah Armstrong, Nino Chindavan, and Jaden Raker with robbery, kidnapping, and conspiracy related to a series of crypto thefts.

According to case materials, the suspects moved from Tennessee to California. To break into victims’ homes, they posed as couriers.

In November 2025, in San Francisco, a "courier" carrying a box attacked a victim at their apartment entrance. The victim was bound with tape, beaten with a pistol grip, and forced to transfer $10 million in Bitcoin and $3 million in Ethereum.

In another “wrench attack,” the victim lost crypto worth $6.5 million.

Armstrong and Raker were arrested in Los Angeles on December 31, 2025, and Chindavan in Sunnyvale on December 22, 2025. They face:

  • up to 20 years for robbery and attempted kidnapping;
  • life imprisonment for conspiracy to kidnap;
  • fines of $250,000 for each charge.

According to CertiK, in 2025, there were 72 “wrench attack” cases worldwide, a 75% increase from the previous year. The total losses from such crimes reached a record $41 million.

Fake OpenAI repository spreads an infostealer

A malicious repository on Hugging Face mimicked OpenAI's Privacy Filter project to deliver an infostealer. This was reported by HiddenLayer researchers.

Hugging Face allows developers and researchers to share AI models, datasets, and machine learning tools.

According to the researchers, the attackers published a similarly named repository, Open-OSS/privacy-filter, containing a loader.py file that executed malicious code to steal data on Windows.

Source: HiddenLayer.

The Python script included fake AI-related code to appear harmless. In the background, however, it disabled SSL certificate verification, decoded a URL pointing to an external resource, then extracted and executed a PowerShell command.

The covertly running code downloaded a start.bat batch file, which elevated system privileges and fetched the final payload, adding it to Microsoft Defender's exclusions. The payload was an infostealer written in Rust, capable of taking screenshots. It stole:

  • cookies, saved passwords, encryption keys, browsing history in Chromium and Gecko browsers;
  • Discord tokens, local databases, and master keys;
  • crypto wallets and their browser extensions;
  • SSH, FTP, and VPN credentials and configs, including FileZilla;
  • system information.

Researchers noted that most of the 667 accounts liking the malicious repository appeared to be auto-generated. Additionally, the 244,000 downloads could have been artificially inflated.

“AI garbage” flooded hacker and cybercriminal platforms

On the dark web, complaints are mounting about "AI garbage" infiltrating discussions, guides, and technical posts. Wired reports this, citing research from the University of Cambridge and the University of Strathclyde.

Experts studied about 98,000 threads on hacker forums related to AI from the release of ChatGPT in 2022 until the end of 2025. During this period, attitudes toward generative models in cybercrime circles changed significantly.

According to the study, hackers previously discussed how neural networks could help write malicious code or find vulnerabilities; now they more often complain about a flood of "AI slop": useless posts and primitive guides on basic topics.

Some forum participants are also unhappy that LLM responses in Google search results reduce traffic to the platforms themselves, negatively impacting hacker marketing.

However, the researchers did not observe a significant impact of AI on the activities of inexperienced scammers. It has neither lowered the entry barrier for newcomers nor caused drastic changes in the cybersecurity industry.

Belarus-linked hacker group attacked Ukrainian government agencies

In March 2026, a new campaign by the Ghostwriter hacking group (also known as UNC1151 and FrostyNeighbor), targeting Ukrainian government and defense structures, was recorded. ESET researchers reported this.

Ghostwriter, specializing in cyber espionage in Eastern Europe, is linked to Belarus.

According to experts, the hackers sent phishing PDF files mimicking documents from Ukrtelecom. Malicious links in the documents led to the download of PicassoLoader, which then deployed the popular Cobalt Strike attack tool.

The hackers used IP-based checks — the infected archive would only load if the victim was in Ukraine.

Researchers noted the group’s high “operational maturity.” PicassoLoader can send a “system fingerprint” to hacker servers every 10 minutes. Based on this data, Ghostwriter operators decide whether to continue attacking a specific target.

Unlike campaigns in Poland or Lithuania, where the group targets a broad range of sectors from logistics to healthcare, in Ukraine, their activity is focused solely on military and government sectors.

TeamPCP hackers threaten to sell Mistral AI repositories

The TeamPCP hacking group threatened to leak the source code of Mistral AI projects if no buyer is found for the stolen data. BleepingComputer reports this.

Mistral AI is a French AI company founded by former Google DeepMind and Meta researchers. It specializes in developing open-weight LLMs and proprietary software.

In a message on a hacker forum, the attackers demanded $25,000 for a package containing nearly 450 repositories.

Mistral AI officials confirmed to BleepingComputer that their code management system was compromised. The breach resulted from the large-scale "Mini Shai-Hulud" supply-chain attack.

Mistral AI states that the affected data is not part of the core source code.

The attack reportedly unfolded in several stages. First, hackers gained access to official TanStack and Mistral AI packages using stolen CI/CD credentials. Then, the malicious campaign spread to hundreds of projects in npm and PyPI repositories, including developments by UiPath, Guardrails AI, and OpenSearch.

Mistral AI acknowledged that attackers briefly inserted malicious code into some SDK packages.

Source: BleepingComputer.

The TeamPCP group claims to have downloaded nearly 5 GB of internal data used by Mistral for training, fine-tuning, testing, and experiments.

The hackers said they would release the data publicly if no buyer is found within a week.

Also on ForkLog:

  • Hackers withdrew $10 million from THORChain.
  • The Tether, TRON, and TRM Labs alliance froze $450 million in crypto assets.
  • Ethereum Foundation launched a service to prevent blind signing of transactions.
  • CertiK announced the “industrialization” of North Korean crypto thefts.
  • Roaring Kitty’s account was hacked for an RKC token dump.
  • Google reported increased AI popularity among cybercriminals.
  • LayerZero admitted mistakes after the Kelp hack of $292 million.

What to read this weekend?

ForkLog’s new article explores how Palantir Technologies, the US Department of Defense’s main software contractor, “ensures obvious Western superiority.”
