AI Wealth-Building Guide: First Create Adult Content, Then Sell Courses

Author: Salad Dressing

Sexual desire is part of human nature. Many a great business model has risen on the back of it, and AIGC is no exception.

A16Z, the top venture capital firm in Silicon Valley, released a report studying AI consumer trends. In this serious report on AI productivity, there’s a page with a laughably ironic line chart: last year, Americans spent more money on OnlyFans than on OpenAI and The New York Times combined.

[Image: A16Z report chart]

It’s both satirical and true: productivity still lags behind sexual tension.

So, how much can you really earn by pushing AI to the edge?

Image source: Giphy

Productivity vs. Sexual Tension

The people behind the first wave of AI virtual models know this best.

Starting around late 2022, as tools like Midjourney and Stable Diffusion became capable of reliably generating images, some people realized that these tools could create hyper-realistic faces, produce content in bulk, and cost almost nothing. They used AI to generate virtual female personas, gave them names, personalities, and carefully crafted “daily life” stories, then ran them on Instagram and TikTok as if they were real people. Private messages with fans were handled by ChatGPT, offering a “girlfriend experience.” The whole operation was almost fully automated, and the operators never even had to show their faces.

Image source: Giphy

This approach thrived most on Fanvue, a competitor platform to OnlyFans. Fanvue is more lenient with AI content. According to official disclosures, by November 2023, AI virtual models contributed 15% of the platform’s total revenue. By 2024, top AI virtual models earned over $20,000 per month, with some mature accounts earning over $200,000 annually. This number continues to grow in 2025. Fanvue CEO Will Monange revealed in an interview that AI creators’ overall income increased by over 60% compared to 2024, and virtual models have become the fastest-growing content category on the platform.

Although OnlyFans officially bans AI content, some still find ways around it. On Reddit, discussions often revolve around how to use AI to push boundaries and make money on OnlyFans—common methods include having real women complete face verification, then using their photos to train AI models for mass content production.

Image source: Giphy

No matter how strict the platforms get, the technology keeps advancing. Today’s AI-generated images are so realistic that even experienced users struggle to tell the difference. Just a few days ago on Xiaohongshu, I saw a video of a handsome guy sitting in a car. If I hadn’t checked the comments and seen a pinned one saying “This AI aesthetic is really good,” I wouldn’t have realized he was AI-generated.

Beyond adult content, some people are making money with AI in completely different fields: children’s picture books.

Zhao Lei (pseudonym) was among the earliest to jump in. At the end of 2022, he was laid off from a big tech company’s product team and was exploring new opportunities from home. Midjourney had just become capable of stable image generation, and when he saw its watercolor-style animals, he had an idea: isn’t this exactly what children’s book illustration looks like? He spent two weeks researching Amazon KDP, with a very simple plan: ChatGPT writes the stories, Midjourney draws the pictures, he formats and uploads, then waits for the money. “It was really profitable back then,” he said. “A few books stacked up, and I was earning over ten thousand dollars a month in passive income.”

But the window didn’t stay open for long. In the second half of 2023, AI-generated children’s picture books exploded on KDP, with nearly 90,000 tutorials popping up on TikTok, all with similar titles: “EASY AI Money,” claiming you can make ten thousand dollars a month from children’s books.

Everyone rushed into the same lane, and sales were quickly diluted. Quality issues also surfaced: AI picture books started featuring dinosaurs with oversized front legs, children with the wrong number of fingers, and other glitches. Major platforms began requiring uploaders to declare whether AI was used, effectively closing off the lane. “It’s now very hard to make money with AI picture books,” Zhao Lei said.

Then he and the others pushing AI to the edge all arrived at the same endpoint: selling courses. (Recently, the “Lobster” craze has taken this to the extreme.)

Image source: Giphy

Zhao Lei sells a “Complete Guide to AI Children’s Book Publishing from Zero to Launch,” while those pushing boundaries sell “AI Virtual Model Building Tutorials.” Both target newcomers who just heard about this and still think the window is open.

Two lanes, different packaging, but selling the same thing: the illusion that “I can do it too.”

Aesthetics and “Old Skills” Trip Up Many People

These may look like easy ways to make money on the cutting edge, but what are the real barriers?

A friend who works as a UX designer at an internet company once told me: network restrictions and membership fees. When Midjourney first launched, she wrote a guide on how to use it and sold it for 99 yuan; it still earns her passive income on Xiaohongshu today. From a tool-usage perspective, she’s right: those barriers are indeed dropping rapidly.

But as someone whose drawing skills stop at stick figures, and who regularly produces ugly images with all kinds of AIGC tools, I need to add a barrier she didn’t mention: aesthetics.

Image source: Giphy

People used to joke that AI can’t replace designers because clients don’t really know what they want. I thought it was just a joke, until I used these tools myself and found it described me exactly.

Last year, I created a media account and wanted to design a logo around the “Cumulative Island” concept, the idea that amid chaotic information flows, some things are worth letting settle. I found reference images, uploaded them into the tool, added a pile of descriptive prompts, and started generating. The results were a mess. I revised seven or eight times, changing the approach each time. I knew the feeling I wanted, but I had no idea how to translate that feeling into precise instructions. In the end, I asked a designer friend for help. She spent twenty minutes, and her version was on a completely different level from my two hours of effort.

[Image: Before and after editing]

The problem wasn’t the tool; it was me. More precisely, I couldn’t turn my vague aesthetic feelings into clear language.

This isn’t just my problem.

A content creator friend started using Seedance to make short videos last year. She picked up the tool quickly but got stuck on writing storyboards. “I know I want a textured look, but putting ‘textured’ into the prompt does nothing,” she said. “I don’t know what kind of lighting, shot angle, or camera movement that translates to.” Her final product, in her words, was “kind of right but not quite.”

Another friend used Marble, a tool that generates 3D scenes from text and images. After repeatedly generating and discarding images, he realized he had no reference frame—he didn’t know what “good” looked like, so he couldn’t judge whether the results matched his vision.

[Image: 3D scene generated by Marble]

In stark contrast, a friend with photography experience, using the same tool, produced much higher-quality images. He said he didn’t spend much time on prompt techniques—just knew what composition and lighting he wanted, and clearly communicated that, so the tool delivered accurately.

The capabilities of tools are rapidly improving, but the skill gap among users isn’t narrowing—in some ways, it’s widening. Previously, no one could produce good work; now, those with aesthetic experience can create excellent results, while others still struggle between “usable” and “good.”

Tools are also responding to this reality. The rise of one-click template tools like NotebookLM rests on a simple logic: they bypass the need to know what you want. The template makes the aesthetic decisions for you; you just fill in the content. But that is also their ceiling: they solve “usable,” not “beautiful.”

The same is true in writing. A friend of mine in marketing was recently reassigned to PR and needs to produce a large volume of copy. Her boss suggested using AI, but that only confused her more, and she came to me asking for an AI writing manual. The core issue: she has no sense of what makes a “good PR article,” doesn’t know the standards, and so can’t judge the AI’s output or figure out how to improve it.

Image source: Giphy

In contrast, I find AI writing much easier. Not because I understand the tools better, but because I’ve been a journalist for years. I can judge expression, know what makes a sentence good or awkward, and see where AI falls short and how to push it further. Aesthetic sense here becomes a practical skill: it shows you the destination, rather than letting AI run aimlessly.

When tool capability is no longer the main issue, aesthetics and “old skills” become the biggest barriers. Without them, using the tools can sometimes be even worse than not using them at all.

Do I care about the “sexiness” of AI versus real people?

The first to explore new territory reap the benefits, but also bear the controversy. And in today’s AIGC scene there’s a strange paradox: whether the work is good matters less than whether AI was used.

Fang Yuan (pseudonym), a brand designer, took on a branding project. Using AI tools, he compressed a two-week process into three days. He thought the results were even better than before. He sent the work out and waited for feedback.

The first reply wasn’t praise but a question: “So fast, did you use AI?” Before he could respond, another message arrived: “We don’t accept designs involving AI.” He still isn’t sure they even opened the attachment. He was frustrated: being too efficient had become a fault.

Image source: Giphy

He’s not alone. In many people’s evaluation systems, AI use has quietly become a moral judgment. It’s different from Photoshop or Excel: no one looks at a retouched photo and asks “Did you use editing software?”, and no one questions a financial report with “Did you build this in Excel?”

AI triggers a different suspicion—closer to “Did you really do this?”

In creative work, there’s an implicit contract: good work means someone has invested time, effort, and refinement. But AI’s emergence breaks the causal link between “effort” and “output” that everyone assumed existed.

A piece made in three days with AI, set against one handcrafted over two weeks, will feel “off” even if the quality is similar. That “off” feeling boils down to one word: unfair.

A University of Arizona study found that when designers explicitly tell clients they used AI assistance, trust in the designer drops by an average of 20%, even if they clarify that AI was only an aid.

As AIGC technology matures, this issue shifts from individual trust to platform regulation.

Since 2023, China has introduced a series of regulations requiring AI-generated content to be labeled: first the “Deep Synthesis Management Provisions” in January 2023, mainly targeting AI face-swapping and voice synthesis; then, in August, the “Interim Measures for the Management of Generative Artificial Intelligence Services,” which brought services like ChatGPT into scope. In March 2025, oversight tightened further with the “Measures for Labeling AI-Generated Content,” covering text, images, audio, and video.

But what’s hard to define is the boundary.

Platforms can identify a video that is 100% AI-generated, but borderline cases are hard to judge. Is a selfie whose color and composition were adjusted by AI “AI content”? If a video uses self-shot footage but is edited and scored by AI, should it be labeled? If an article’s first draft was written by AI and then edited by a human, does the label still apply?

Image source: Giphy

Behind the boundary problem lies the real issue: responsibility. Without clear definitions, accountability is murky. If a song’s melody is AI-composed and a human rewrites the lyrics, who answers for a copyright dispute? If a product review is AI-generated, with only the tone adjusted by the blogger, and the product turns out to be a disappointment, who answers to the readers? Asking “Was it AI?” is really asking a more fundamental question: is there someone genuinely responsible behind this work? Is anyone thinking about the consequences? Does anyone care about the results?

The hardest part isn’t defining the boundary—it’s assigning responsibility.
