When a company becomes an Agent: 5 reflections on organizing in the AI era
“AI Pioneers Research” is a deep-dialogue series from the AI Lens. Transformation is not a linear evolution but a cycle of overthrow, iteration, and reconstruction. Each issue invites an AI pioneer—an AI-native entrepreneur, a senior executive transforming their enterprise with AI, or a super-individual reinventing themselves with AI—to share their conclusions, the process of dismantling, the logic of rebuilding, the tuition paid along the way, and what they hold onto amid continuous change, providing real reference points for others walking this path.
In the first issue, we invited Dr. Fan Ling: founder and CEO of Tezign, professor at Tongji University, doctoral supervisor, and director of the Artificial Intelligence Design Laboratory. Tezign is a 10-year-old enterprise-grade Agentic AI company. Built on its self-developed Generative Enterprise Agent (GEA) architecture, it creates enterprise-level intelligent agents that understand business context, participate in complex decision-making, and continuously drive results—helping enterprises achieve growth, innovation, and productivity gains in long-term user insight, product innovation, and marketing growth. More importantly, Tezign is also rebuilding its own organization with AI: from pod transformations and community cultivation to constructing Generative Enterprise Agents out of skills, context, and orchestration.
We discussed with Dr. Fan how AI is changing organizational structures, talent density, client delivery, and product barriers, as well as the unresolved issues behind these changes.
【Insightful Quotes】
“AI is not just a tool to improve R&D efficiency, but an Agent that helps those lacking R&D resources access them.”
“Fundamentally, AI is anti-industry and anti-profession division—it brings us back to a Renaissance-like state of all-round capability.”
“Leadership, ownership, responsibility, resilience—these seemingly intangible qualities—become very concrete in the AI era.”
“Most companies are still at the copilot stage: adding AI to existing functions. But AI capabilities have advanced to the point where organizations can be redesigned around AI.”
“AI-native organizations are not about embedding AI into human workflows, but embedding human judgment into AI workflows. A company can be an Agent, with people serving as judgment providers within these Agents.”
“We are in an era of product surplus and user scarcity. Growth will become increasingly difficult and more critical. In the AI era, focus should be on those capabilities that AI cannot compress in time.”
Research Guest Introduction:
Dr. Fan Ling, founder and CEO of Tezign, professor at Tongji University, doctoral supervisor, and director of the AI Design Laboratory. Tezign has been using AI technology for 10 years to help enterprises solve problems related to user insights, product innovation, and marketing growth. In this dialogue, he shared internal pod organizational changes, AI tool dogfooding, enterprise context systems, and explorations of products like Atypica / GEA.
Yu Yi, senior researcher at Tencent Research Institute, focuses on AI-native product innovation and corporate transformation, with years of experience in venture capital and ecosystem incubation. Recognized as a LinkedIn China annual expert, one of Tencent's outstanding AI experts and sharers, and a mentor in AI learning circles.
【Research Summary】Tezign’s AI Native Organization Experiment
Change Trigger: Cursor’s Best Use Is Not R&D
Dr. Fan Ling has been observing who uses AI tools best within the company. The answer was surprising—not R&D, but product managers and designers. They used Cursor to access resources that previously required scheduling R&D. This made Fan realize that AI is not just about speeding up specialized divisions but about enabling a single person to cross multiple roles. The organizational assumption of “one person, one post, hierarchical promotion” since the Industrial Revolution is being fundamentally shaken. He calls this “AI as the anti-industrial revolution.”
Organizational Reshaping: Pod + Community Dual Track
Based on this insight, Fan Ling made two organizational moves. First, dividing the entire company into pods—cross-functional teams of 3 to 10 people that deliver work in an internally closed loop, no longer relying on cross-department coordination. Three years ago, Tezign tried pods once and failed because people weren't psychologically ready; but AI has spontaneously reduced horizontal resource allocation, and the soil for pods has matured. Second, outside the pods, building communities: lateral communities that help everyone develop cross-boundary skills like sales, product, and coding, plus a dedicated Leadership community. Fan Ling believes that in the AI era, leading 100 Agents is harder than leading 10 people; pod leaders need not only AI skills but also P&L awareness, business intuition, and patience.
A phenomenon accompanying the organizational change is the dissolution of role boundaries. Marketers use Claude Code to script LinkedIn outreach, effectively becoming Marketing Engineers; product managers and designers produce features directly with Cursor, no longer waiting for R&D scheduling. R&D's share of headcount continues to decline from 50%, yet the number of people who can code has increased.
Cultural Engine: Founders’ Hands-On Build
Organizational structure is just the skeleton; what truly makes AI run is culture. Tezign also conducts systematic training (the ABC Plus program), but Fan Ling finds a more effective driver is the founders personally stepping in. He, the CTO, and the product lead formed a small team, using AI to develop new products, with user growth far exceeding that of other seven-person teams. They demo during lunch or over coffee, and other pod leaders follow suit. Over time, a habit of "proudly showcasing what we build" has formed internally. The contagious power of this dogfooding culture far exceeds top-down promotion.
Infrastructure: Layered Context System
Tezign is a company with a deep document culture, even turning meeting recordings into documents. Fan Ling is building a layered context system: at the company level, schemas like schema.md serve as guiding documents, acting as indexes pointing to the hundreds of millions of accumulated documents; at the pod level, each team has its own context; at the personal level, each individual manages their own dialogues and preferences. He emphasizes that more context isn't always better—some scenarios need frameworks rather than details. Enterprise-level contexts must also handle permissions and confidentiality. Fan Ling gives an example: while screen-sharing, he searched for a Wi-Fi password, and the AI inadvertently also retrieved confidential passwords that only he had access to. Some core data he prefers not to put into the context system at all.
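The layered lookup and the permission problem described here can be sketched as follows. This is a minimal illustration under assumptions: the layer names, document fields, and ACL check are hypothetical, not Tezign's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class ContextDoc:
    path: str
    content: str
    acl: set = field(default_factory=set)  # user ids allowed to read; empty set = public

@dataclass
class ContextLayer:
    name: str                       # "personal" | "pod" | "company"
    docs: list = field(default_factory=list)

def resolve_context(layers, query, user):
    """Walk layers from most specific (personal) to most general (company),
    returning only documents the user is permitted to see."""
    hits = []
    for layer in layers:
        for doc in layer.docs:
            if query.lower() in doc.content.lower():
                if doc.acl and user not in doc.acl:
                    continue  # confidential doc: filter out rather than leak
                hits.append((layer.name, doc.path))
    return hits

company = ContextLayer("company", [
    ContextDoc("schema.md", "index of all docs; guest wifi password: guest123"),
])
personal = ContextLayer("personal", [
    ContextDoc("secrets.md", "root wifi password: s3cret", acl={"fan"}),
])

# A guest searching "wifi password" sees only the public company doc;
# the confidential personal note is filtered out by the ACL check.
print(resolve_context([personal, company], "wifi password", user="guest"))
```

The ACL filter is the point of the sketch: without it, a broad search (like the Wi-Fi example above) happily surfaces documents the requester should never see.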
Product Sharing: GEA, Atypica
On the product side, Tezign’s layout revolves around a core logic: accumulating things that AI cannot compress in time.
GEA (Generative Enterprise Agent) is an enterprise-level Agent architecture. It doesn’t focus on a single Agent but emphasizes Context and Orchestration—one Lead Agent with several Sub-Agents, equipped with enterprise Skills and Context, forming dedicated project teams in user insights, content growth, product innovation, etc., operating 24/7 as a virtual company.
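The Lead Agent / Sub-Agent pattern implied above can be sketched roughly as follows. Class names, skills, and the dispatch rule are illustrative assumptions, not the actual GEA architecture; the point is the orchestration shape: one lead routes tasks to skill-tagged sub-agents, injecting shared enterprise context, and falls back to a human when no sub-agent qualifies.

```python
class SubAgent:
    def __init__(self, name, skills):
        self.name = name
        self.skills = set(skills)

    def run(self, task):
        # Stand-in for an LLM call; returns a description of the handled task.
        return f"{self.name} handled '{task['goal']}'"

class LeadAgent:
    """Routes each task to the first sub-agent advertising the needed skill,
    attaching shared enterprise context to every call."""
    def __init__(self, sub_agents, context):
        self.sub_agents = sub_agents
        self.context = context  # shared enterprise context, e.g. brand docs

    def dispatch(self, task):
        for agent in self.sub_agents:
            if task["skill"] in agent.skills:
                task = {**task, "context": self.context}
                return agent.run(task)
        return "escalate to human"  # no sub-agent qualifies: ask for judgment

team = LeadAgent(
    [SubAgent("insight-bot", {"user_research"}),
     SubAgent("growth-bot", {"content", "seo"})],
    context={"brand": "schema.md"},
)
print(team.dispatch({"skill": "seo", "goal": "draft landing page"}))
# → growth-bot handled 'draft landing page'
```

The "escalate to human" branch mirrors the article's framing of people as judgment providers inside the Agent, rather than the Agent living inside human workflows.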
Atypica focuses on “AI that understands humans.” Based on about 1 million expressions, stories, cognitions, and behaviors from real users, it builds subjective world models to simulate consumers and professionals. A typical case: a U.S. professor used 20k real household samples to generate 1,000 typical personas, simulating family discussions around fertility, then injecting policy variables to observe behavioral changes—applying AI to social science research.
Game Lab (game.atypica.ai) addresses the accuracy of AI simulation of humans. It lets real people and AI play the same economic games (e.g., trolley problem, ultimatum game), using real data to fine-tune AI until its decisions are indistinguishable from humans. This is core to Tezign’s evals.
Business Closed Loop: Scenario Discovery Drives Customer Dialogue
After internal pod deployment, external strategies also evolved. Pod leaders become “Scenario Discovery Officers”: using AI to extract about 100 common scenarios from over 600 customer needs, then structuring them with SPIS (Situation-Pain-Impact-Solution). Talking about “others’ pain points” makes conversations easier than product demos—customers feel “you understand me.” About 30-40% of a pod leader’s time is spent on scenario collection.
Cost and Unresolved Tensions
The most skilled at AI are often the most exhausted—capability expansion leads talented individuals to take on more, increasing fatigue. Fan Ling admits this is the biggest unresolved issue. As everyone can build, repetitive work increases, with multiple versions of the same scenario library created by different pods. He tolerates rather than controls this. There’s a gap between demos and production evals; without real physical scenarios and unique data, relying solely on model upgrades makes barriers fragile.
Deeper tension: AI lowers exploration costs but also increases anxiety. Product surplus, user scarcity, demos are easy, but growth is the scarce resource. Fan Ling’s approach: building in public—building while gathering feedback, making exploration itself part of the result.
Full Interview
Organizational Restructuring: From Copilot to High-Coherence Pod Model
Yu Yi: Let’s start with what I find most interesting—“AI organizational transformation.” As far as I know, Tezign started experimenting with organizational changes early on. About a year ago, Tencent Research Institute published an interview and case study on Tezign, discussing some internal attempts. It’s been a year, so I want to ask about the latest progress. I remember you mentioned restarting pod (small cross-functional teams) transformations, and this is the second attempt. Many people might not fully understand what a pod is. It’s actually an organizational architecture change, originally initiated by Meta. To me, it’s like forming a “special forces” unit within a company to quickly grasp new environments or technologies. Is my understanding correct? I was very impressed when you said the first pod attempt failed because people weren’t psychologically ready, but now, with AI, it seems feasible again, so you’re trying again. Could you share why you chose this structure back then? How is it progressing now? I’m very curious.
Fan Ling: I’ve been using various AI tools for a long time, since we’re an AI product company ourselves. During this process, I kept asking myself: to what extent do we truly realize the value of AI? Previously, if a tool could improve efficiency by 20-30%, it was considered very worthwhile. But for AI, is a 20-30% efficiency gain really good use or not? I didn’t have a clear standard at first.
Later, I had an Aha Moment—though it might sound trivial now, it was a huge shock at the time. Because I care about efficiency, I asked my team about Cursor (an AI coding tool). I found that the most creative users of Cursor weren’t R&D, but product managers and designers. They used Cursor to directly access resources that previously required scheduling R&D. For them, Cursor itself became a form of R&D resource.
This made me realize: AI isn’t just a tool to help R&D speed up; it’s an Agent that helps “people lacking R&D resources” solve problems. AI’s emergence makes us more versatile, not just more specialized or competitive.
This led me to think about organization. Our current organizational structure, even our education system, are designed based on the logic of the Industrial Revolution: the so-called “thousand industries,” where you pick a profession, advance from junior to senior. Many say AI is the next industrial revolution, but I think AI at its core is “anti-industrial revolution”—it breaks down industry and profession boundaries, bringing us back to a Renaissance-like state—an individual can be a “full-stack person.” If our product managers and designers can become more versatile with AI tools, we might no longer need rigid role divisions. Instead of one person, one role, now “people + AI” can play multiple roles simultaneously.
Yu Yi: How do you implement this organizational judgment?
Fan Ling: That’s our fundamental belief. Its biggest benefit is greatly reducing meetings for alignment and synchronization, and shortening front-end/back-end coordination time. Based on this, we pursue a state of “high cohesion, low coupling.” High cohesion means small teams—maybe two or three people—who can deliver a task internally without relying on cross-department communication; low coupling means departments don’t need much cross-department coordination, everyone can work independently, saving ineffective meetings.
In organizational form, that’s the pod. We tried implementing pods three years ago. We explained it extensively, but ultimately reverted to traditional role-based division because people weren’t psychologically ready.
But last year, with the Aha Moment I mentioned, I saw teams spontaneously reducing lateral resource allocation, relying on AI to deepen their work. I think the opportunity for pods has returned. I personally dislike meetings; I’m a builder at heart, preferring to do things myself rather than waste time in meetings. At that time, someone told me OpenAI’s organizational structure is called a pod, and GPT was built by 35 pods. I thought it was great—this confirmed the direction was right. When teams become small operational units, everything changes.
Yu Yi: Dividing teams into small units sounds very ideal.
Fan Ling: The smaller the unit, the stronger the ownership. Previously, if you were a front-end developer, you only handled front-end code, not the backend database, and didn’t need to care much about the final product experience. But now, in a 3-5 person pod, everyone must work together to deliver the final result. This actually helps improve user experience and product quality.
So we are pushing towards pods again. This time, we’re much more mature—people’s psychological readiness and skills are aligned. But implementing pods also has two side effects. The first is straightforward: the better someone uses AI, the more functions they take on, and the more exhausted they become. I often say, if you walk around the office at night, you’ll find that those working overtime are not the low-efficiency ones, but the high-efficiency ones.
Yu Yi: Yes, I often tell colleagues, “Why aren’t you leaving yet?” It’s because they’re efficient and can handle multiple tasks simultaneously.
Fan Ling: Exactly. First, they can handle many tasks in parallel; second, they unconsciously extend task boundaries outward. Originally, you only wrote front-end code; now, with AI, you might also do product design; after finishing the product, you might even handle Go-to-Market strategies yourself.
For example, yesterday, a colleague from our marketing team showed me his website CMS system. Previously, it might have taken a team of 20 to develop; now, he did it alone with AI. I thought it was amazing. But the result is, these people are even more exhausted. AI didn’t make them easier; it made them work harder. They are very quality-conscious and picky. Previously, they might only reach 60 points; now, with AI, they realize they can push to 90 points, so they work day and night. That’s the first side effect we need to address.
Yu Yi: That’s indeed a happy problem. What about the second side effect?
Fan Ling: The second is that we realized we can't rely solely on spontaneous exploration; we need to "help people onto the horse and walk them a stretch"—that is, provide systematic training. So last year, we launched a program called ABC Plus (AI Builders and Creators Plus). We invited professional trainers to teach the team how to use Claude Code and various Agent tools. Only when we help them over the initial cognitive hurdle can they truly leverage the tools.
Alongside the pod operational unit, we started building a parallel organization called Community. In traditional structures, engineers, product managers, and sales are managed as solid lines; now, we turn these into dotted-line functions—Communities. The main role of a Community leader is to help more people learn sales, coding, and product skills. It’s no longer about hiring with a “sales” title and only tracking KPIs; we want to help everyone become more well-rounded.
So, the current structure is: the solid-line operational units are pods—some serve clients, some focus on R&D; the lateral, dotted-line organization is the Community, responsible for helping people grow, not for KPI assessment. Besides the coding and product Communities, we also established a Leadership Community.
Because I found that in the AI era, one person might need to manage 100 Agents; or as a manager, lead 100 employees proficient in using Agents. In this context, qualities like leadership, ownership, responsibility, and resilience—once seen as “virtual”—become the most concrete and core hard skills.
Yu Yi: So, Tezign’s pod model draws from the experience of companies like OpenAI, but it’s not a direct copy of “special forces.” Instead, it’s a fundamental principle for the whole company, combined with a cultivation mechanism like Community, creating a customized variant.
Fan Ling: Actually, we don't know exactly who we learned from; if they call it a pod, we call it a pod too. The underlying logic is similar—transforming hierarchy into circles. We're also interested in the concept of "teal organizations," and Tezign's temperament aligns well with it. It's just that in the past, people's abilities weren't sufficient; now, with AI, capabilities have finally caught up. This adjustment is a whole-company change. We just finished the Q1 review; everyone feels uncomfortable but understands we can't go back. I'd guess that by Q2, everyone will feel a bit more comfortable.
Yu Yi: Where does this discomfort mainly come from?
Fan Ling: From many aspects. In the past, a project involved a group of over 20 people, each with clear responsibilities. If something wasn’t done, it was easy to blame someone’s scope. But in a pod, even with just three or four people, everyone must cover for each other. If something isn’t done well, no one can pass the responsibility; control becomes stronger, but pressure also increases.
Also, in Q1, we encountered issues. For example, if one pod is overwhelmed, should another pod help? Currently, everyone isn’t omnipotent; some pods are stronger in R&D, others in sales. Should cross-group collaboration happen? That triggers coordination issues. So, at this stage, everyone feels uncomfortable, and pod leaders face great pressure. But I believe this direction is correct.
Yu Yi: I’m curious—after becoming a pod, is there a clear size limit for the operational units? For example, is there a maximum number of people per pod? Or do teams naturally grow to the most efficient size?
Fan Ling: I personally prefer pods to be as small as possible. But objectively, there aren’t enough people capable of leading pods at the moment, so our pods tend to be around 10 people, possibly split into two subgroups.
Yu Yi: I’m quite interested in this part. I also asked Gong Yin from Anker; he thinks a good operational unit is about 6 people—meaning a “big shot” who understands both business and AI, leading five people who can use AI. That’s his view on optimal size.
Fan Ling: I think the principle is, as long as it’s small enough, it’s fine. My view is, if one person can do it alone, no need for two; only if it’s impossible, then up to 10. I don’t believe in a “magic number” like 5 or 6. The core logic is to keep the unit as closed and small as possible, giving everyone as complete ownership as possible.
One point I want to emphasize: I don’t aim to compress team “man-hours” (workload). For example, if a task normally takes 100 person-days, I won’t force you to finish it in 30 days just because you only have 3 people. Everyone will optimize workflows themselves. I care more about whether we can eliminate low-efficiency, tedious horizontal alignment and re-approval processes. The time saved comes from reducing inefficiencies, not from squeezing individual workloads.
Cultivating AI-Native Leaders and Business Perspectives
Yu Yi: You mentioned that currently, only a few can be pod leaders. I have two curiosities: first, what kind of people do you think can become pod leaders? Second, what mechanisms does Tezign use to identify and cultivate pod leaders?
Fan Ling: Who can be a pod leader? Besides the soft skills everyone agrees on (like AI skills, learning ability), I think some hard skills are very hard to develop but essential—like leadership, P&L awareness—even if you come from R&D, you must understand P&L; and a strong sense of responsibility.
Also, in this fast-changing environment, “patience” has become a very important quality. These are a combination of soft and hard skills. As for AI tool proficiency, I believe anyone who can become a pod leader won’t be lacking in this area.
How to cultivate? That's why we added a Leadership module in the Community. We're not very systematic yet; in the early stage we may offer leadership training or send people to business school. In the past, we trained team leads mainly on managing their teams; now, these leads manage not just a few people but hundreds of Agents—effectively the capacity of 100 people. So I think everyone should take business and leadership courses. After all, in the AI era, technical skills can be picked up daily, but business thinking and management are often the real weak points.
Yu Yi: So, for AI-native talents, the key to becoming a leader of a “people + AI” team is to learn business and leadership skills.
Fan Ling: It depends on the team. For example, some traditional industry clients don’t lack methodology or leadership; they’re used to managing thousands of people. For them, the main shift is mindset—moving from rigid SOPs to AI-native thinking. But for a tech company like Tezign, AI has made management and planning skills even more important.
Yu Yi: That's very interesting. I recently read a Harvard paper that studied over 500 companies in a roughly 100-day accelerator program. They had a control group: one group followed traditional business training; the other, starting from the second or third week, incorporated external AI case studies. The group with AI case studies showed significant improvements in subsequent funding and results. It shows that training approach and content are crucial.
Fan Ling: Exactly. AI tools actually help many business leaders see possibilities for implementation. Previously, many business theories stayed at the “talking” level, lacking practical means; but now, AI makes many ideas truly implementable. I see AI as a “business technology.”
Just like the previous generation of ERP or CRM systems, which are tools for implementing business thinking, every business theory needs technology to carry it. Conversely, AI is an opportunity for many business ideas that were only theoretical to flourish.
Yu Yi: I want to ask a related research question. Some argue that past technological revolutions had clear input-output efficiency mappings—definite, measurable gains—whereas with AI the dividend is obvious at the individual level, but at the organizational level this mapping breaks down. Some papers are now trying to verify with data whether the mapping still holds. Dr. Fan, have you observed differences between individual-level and organization-level efficiency gains? Is it because cost reduction and efficiency are still only imagined possibilities, or are there other reasons?
Fan Ling: We haven’t deliberately quantified organizational changes, but results speak for themselves. For example, our team size hasn’t changed, but last year’s business grew by 60%, and this year’s target is 80-90%. That’s enough to prove AI’s value.
But as I said, some things can’t be quantified by data. For instance, our leaders are truly exhausted. Even with AI helping them do a lot, their mental load has increased. If you work quietly with headphones, focusing on coding, your work is less interrupted; but now, a leader manages 6 or 10 Agents, with many people running below. The mental burden is huge and hard to quantify.
Also, I think that although AI changes rapidly, humans change slowly. Two years ago, AI's main product form was the copilot—an assistant for each person that improved individual efficiency, but organizational structure didn't change. Last year, reasoning models like DeepSeek-R1 and o1 emerged, and AI started to plan tasks. This year marks the dawn of Agentic AI—AI truly starting to do the work.
Today, most companies’ organizational structures still resemble the Copilot stage from two years ago—adding some AI tools to functions for efficiency. But in fact, AI capabilities have reached a point where you can redesign organizations around AI. The company itself can be an Agent, with people providing judgment within this Agent.
There are many buzzwords now, like hanging a few AIs in the org chart. But that’s still “using a few AIs to do work within a human organization.” Have you considered redesigning the organization based on AI logic? For example, last year, we helped a company build a 7×24-hour R&D Agent. Previously, humans initiated projects, and AI helped with drawing, research, etc., but humans controlled the rhythm. But what if AI could run 24/7, only pausing to ask your opinion? If you don’t reply in 10 minutes, it proceeds to the next task. That’s a completely different rhythm.
We’re working on a product called GEA (Generative Enterprise Agent), which is about reshaping company workflows in product development, marketing, etc., with AI. Instead of embedding AI into human workflows, it’s about embedding human judgment into AI workflows. An AI-native organization isn’t just “everyone using AI,” but reconstructing the organization with AI logic.
Yu Yi: I strongly agree. If it’s still at the Copilot stage, it’s definitely not an AI-native organization. I even think the so-called “AI-native organization” last year didn’t really exist; it was just a productivity boost for individuals. After I started doing web coding intensively, I see my relationship with Agents as managing a “company of 100 people.”
The biggest realization is that humans can't stay synchronous with AI all the time—if they try, they get exhausted. It's obvious that AI isn't a species like us; its work rhythm is faster than humans can follow. So I started using asynchronous communication, email, calendars, and other mechanisms to manage Agents.
Many discussions (like Block’s views) are exploring this new organizational hierarchy. Since AI can produce output itself, connecting AI output with market needs becomes the most important thing. This is even more critical than “how humans and AI collaborate.”
Fan Ling: Of course. I’ve always believed: if your company is small and not very innovative, your accumulated experience might become a burden in embracing AI. You might really lose to a lean startup with just two or three people. Though it’s a warning, it’s not inevitable; many large companies feel this urgency and are actively transforming. Big companies might turn around quickly.
This year, I hear the tone shifting from “AI is interesting” to “We have problems—how to lay off staff?” “How to restructure organizations?” “How to solve data security?” Everyone is testing small, but the willingness to act is strong. Last year, most discussions were just talking about AI in videos; this year, many brands, much larger than Tezign, are asking the same questions, even explicitly planning to reorganize their AI Centers.
Yu Yi: I also deeply feel that entrepreneurs are eager to act now. Silicon Valley is also very active. Could it be that the emergence of products like Xiaolongxia (literally "crayfish") lets everyone see tangible results of AI doing work? Previously, using Claude Code, bosses might not have strongly felt the "machine substitution"; but now they suddenly realize they can directly command AI to do things, and with their experience, the results might even surpass their subordinates'. So they're eager to get started.
Fan Ling: Yes, every revolutionary product tends to pull more people in. There are also some less-discussed points. For example, should we formalize corporate SOPs and capabilities into Skills, so Agents can invoke them? Jack Dorsey (Block founder) mentioned in an article that every enterprise has a “world model,” a shared Context System.
In recent months, building a company-specific knowledge base or context system is still very heavy—much more difficult than generating ideas. Making everyone experience commanding computers is easy, but transforming that into a system that creates value for the entire enterprise requires a lot of Harness Engineering and Context System construction. It’s a huge investment.
In the Chinese context, a less-discussed point is Evals (evaluation/validation). Overseas, I see this as a huge market. Moving from POC (proof of concept) to Production, most companies need Evals. Most clients haven’t invested much in Evals; they just try tools for competitive research or new product innovation, seeing potential. But to ensure the whole company’s safe, stable use and value creation, an Evals system is essential.
Finding Scenario and Balance: Tensions in Exploration
Yu Yi: Last week, I talked with colleagues in Hangzhou about this model. I really like a framework proposed by a professor: the first layer is strong support from Leaders; the second layer is enabling ordinary people to use it; the third is establishing a Lab to identify excellent use cases and turn them into products or schemes for company-wide promotion.
For example, I like a native AI company called Every. Although they only have a dozen people, they set a role called “AI Discovery Operations Officer,” who weekly discusses pain points and AI usage with the CEO and team, consolidating experience for company-wide benefit. I wonder if Tezign has a similar role or new structure?
Fan Ling: In our case, the pod leader actually plays the role of “Chief AI Scenario Discovery Officer.” Tezign has a strong document culture. Now, even meeting recordings are turned into documents. A typical example is using OpenClaw to read these massive documents and summarize customer scenarios.
In the past three months, we received over 600 customer needs. AI helped us extract about 100 common scenarios. We used a methodology called SPIS: Situation (current state), Pain (pain points), Impact (effects after applying GEA), Solution (corresponding schemes). After AI sorted these into 20-plus core SPIS entries, we did two things:
First, shared with customers to verify if these scenarios are generalizable;
Second, used these pain points to ask customers if they have new scenarios to contribute.
Previously, we made demos based on product features; now we open conversations with other customers' pain points. Customers share more because they feel "you understand me." Then we tell them how AI can solve those issues. So scenario collection is very important; our pod leader spends about 30-40% of their time on it.
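The SPIS workflow described above (tag each raw need as Situation / Pain / Impact / Solution, then surface the common scenarios) can be sketched as a small aggregation. The records below are invented for illustration; real entries would come from the 600+ customer needs:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class SPIS:
    situation: str  # S: the customer's current state
    pain: str       # P: the pain point
    impact: str     # I: the effect after adopting GEA
    solution: str   # S: the corresponding scheme

# Hypothetical records standing in for AI-extracted customer needs.
records = [
    SPIS("weekly campaign review", "manual asset tagging",
         "tagging time cut sharply", "auto-tagging agent"),
    SPIS("new product launch", "manual asset tagging",
         "tagging time cut sharply", "auto-tagging agent"),
    SPIS("quarterly planning", "scattered customer feedback",
         "insights in hours, not weeks", "feedback-mining agent"),
]

# Group by pain point to surface the most common scenarios.
for pain, count in Counter(r.pain for r in records).most_common():
    print(f"{count}x {pain}")
```

Sorting pains by frequency is what turns 600 individual requests into the short list of generalizable scenarios that the pod leader then takes back to customers.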
Yu Yi: It sounds like your scenario collection is more outward-facing. How about internal software skills or employee routines (the so-called “eating your own dog food”)?
Fan Ling: Last year, when we developed new products, we mainly "ate our own dog food" and did the vibe coding ourselves. I, the CTO, and the product lead formed a small team and wrote the code ourselves. The user growth and business increase this small team produced far exceeded that of the other seven people.
As the people most curious about AI, we try all kinds of things. And after trying, I don't just rely on external trainers; I insist on producing something myself and demoing it to the team.
Yu Yi: Your case is often shared externally—an example of founders doing weekend projects and reaping AI coding dividends.
Fan Ling: Later, I realized I’m not special. Many founders in tech or product fields, like those at Shopify, Intercom, Airtable, said last year: “I’ve never written so much code as last year.”
If we believe management no longer relies on "big meetings," then I'll spend every lunch, dinner, and coffee break showing different people what I built. Recently, I showed how I built my own wiki with AI. When there are tangible results, others follow suit. My pod leader shows me demos built for clients; the marketing lead shows me her CMS. Everyone develops the habit of "proudly showcasing what they build."
The current problem isn't doing too little but building redundantly. For example, every team wants its own scenario library, because the cost of coding one is so low. But I prefer this informal approach that encourages experimentation over top-down pressure. On one hand, it helps absorb different ideas; on the other, AI changes so fast that I don't have full confidence myself. I just play the cheerleader, encouraging teams to try different directions. When better tools come out, we'll see which approach is optimal.
Yu Yi: That’s a happy problem! Speaking of which, I’m working on a series called One Question, exploring AI’s real issues with different people. Recently, I’ve been focusing on “tensions in the AI era”: how to balance certainty and exploration.
This tension is not only organizational but also personal. I once got so immersed in exploration that I would take a task to 80 points, just one push away from full delivery, and then switch to a new exploration. AI changes so fast, full of unknowns and possibilities: if you don’t explore, you lose opportunities; but if you only explore and deliver no certainty, neither the company nor the individual can survive. Dr. Fan, as a fellow explorer, how do you manage this tension?
Fan Ling: Anxiety is inevitable. When anxious, just watch fewer videos of Dr. Xiaohui and me! All AI-related content manufactures anxiety, making you feel everyone else has better methods. But once you talk to outsiders, you realize the world is still a makeshift stage: people are panicking, but few are actually good at using the tools and running the processes.
So in the AI era, don’t over-compare yourself, and don’t focus only on “cost reduction and efficiency”; think about blue-ocean markets you couldn’t reach before. Recently I visited the U.S. to see products and clients. I found that American models are indeed good, but some applications are poorly executed. Meanwhile, in China’s red ocean, AI applications are very refined. Look overseas, say in Singapore, and peers there think Chinese AI applications are very advanced.
This is survivorship bias. Your surroundings are full of proactive AI enthusiasts, so you feel behind every day; but look at the blue ocean, and you’ll see your work still has great value. Don’t just focus on cost-cutting; explore incremental markets. As for personal anxiety, sleep well, and turn off the computer when needed.
Yu Yi: I’ve been adjusting my sleep recently, and it’s been quite a headache. Let’s bring the topic back: since AI lets us explore so much, how do we ensure business results are certain? How do we balance exploration and certainty?
Fan Ling: That’s a tough question; I haven’t done well on it myself. A popular approach now is Build in Public: build while gathering external feedback. Maybe it’s a compromise that makes the exploration process itself part of the result.
Also, I want to say we’re in an era of product surplus and user scarcity. If a company still brags “we’re a tech company, 90% R&D,” I’d think: your investment in sales and growth is definitely insufficient. A founder coding himself while leading a 10-person marketing team might be the new normal for AI companies. Producing a good demo is easier now, but growth is harder. So if you run a full company, you need to spend much more on growth than before.
Yu Yi: What’s Tezign’s current staffing ratio? Last time I talked with Zhang Haoyang from EvoMap, he said that apart from himself writing code and burning tokens, everyone else on his team works in marketing and growth. That seems to represent a new normal.
Fan Ling: This change is real and two-way. On one hand, the proportion of traditional R&D staff is definitely decreasing. We used to have 50% R&D, 50% business; now, the exact ratio isn’t clear, but R&D is definitely less.
On the other hand, those doing marketing are becoming more like engineers. For example, they want to introduce products on LinkedIn, using Claude Code to write scripts to collect contacts. This used to be only R&D’s job; now, marketing does it themselves. So, their mindset and methods are becoming more engineering-like. I haven’t officially renamed them “Marketing Engineers,” but practically, they are.
Yu Yi: Indeed, I feel I should add “Builder” to my title too, even though I’m not a formal programmer. The biggest change in how I build things is that I no longer see code as a barrier but as a callable capability. That’s a fundamental shift.
Fan Ling: Yes. On one hand, everyone is an engineer; on the other, the proportion of traditional engineers will decrease.
Building Enterprise Context Systems and the Compound Effect of AI
Yu Yi: Very interesting. We’ve discussed organizational topics; now let’s move to Tezign’s practice. Tezign is a company with a strong document, context, and data culture. Everyone agrees that for companies not building large models, the core of delivery isn’t tokens but intellectual assets and professional context.
In the AI era, as AI becomes a “digital employee” entering organizations, how does Tezign build and manage the company’s context? How is communication between humans and AI established?
Fan Ling: We’ve recently been building a self-updating context system. Previously, our approach was like a complex Wiki or knowledge base; but now I think we should go back to basics.
It’s just a simple folder system containing guidance documents, which we call schema.md, that set the basic principles. As long as the context system is simple and clear enough, it can act as an index pointing to the hundreds of millions of documents we’ve accumulated. Each pod can build its own dedicated context on top of these rules, without needing company-wide sharing. Above that sits personal context; my habit is to unify my dialogue contexts across different models.
This creates a three-layer structure: personal, pod, and company. I once tried to make the system update contexts in real time, but it burned through 100k RMB in no time! Now I don’t update in real time; the company-level context updates weekly. But I keep a document-hygiene habit: my own context is cleaned daily.
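The three-layer structure Fan Ling describes (company, pod, personal, each with its own schema.md) can be sketched as a loader that concatenates the layers from most general to most specific. The directory names and `load_context` function below are invented for illustration; the interview doesn't show Tezign's actual tree:

```python
import tempfile
from pathlib import Path

# Hypothetical layout:
#   company/schema.md        <- shared principles, refreshed weekly
#   pods/growth/schema.md    <- pod-specific context
#   personal/fan/schema.md   <- personal context, cleaned daily
LAYERS = ["company", "pods/growth", "personal/fan"]  # general -> specific

def load_context(root: Path) -> str:
    """Concatenate schema.md files from each layer, most general first."""
    parts = []
    for layer in LAYERS:
        schema = root / layer / "schema.md"
        if schema.exists():
            parts.append(f"## {layer}\n{schema.read_text()}")
    return "\n\n".join(parts)

# Demo with two of the three layers populated.
root = Path(tempfile.mkdtemp())
for layer, text in [("company", "Be concise."), ("personal/fan", "Prefer tables.")]:
    (root / layer).mkdir(parents=True)
    (root / layer / "schema.md").write_text(text)
print(load_context(root))
```

Because missing layers are simply skipped, each pod or person can opt in without any company-wide coordination, which matches the "simple folder system as index" philosophy rather than a heavyweight wiki.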
Yu Yi: I agree! I’ve also spent a lot of time building my own file system and went through a very painful period. I even ran an experiment: I have a folder, the AI has a folder, and I don’t intervene; I just watch what it does. Recently, I restructured the system together with it. It involves how to compress and transmit context, which is very different from before.
Fan Ling: Yes, context is essentially an index built on reading habits. I think the difficulty isn’t personal context but team and company context, because those involve permissions and confidentiality. Tezign spent a lot of time on this. For example, I once searched for the Wi-Fi password during a presentation, and the AI also retrieved confidential passwords only I could see. Very embarrassing. So managing context and permissions is extremely complex and core. I even tend not to put some core confidential data into the context system at all.
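One common way to avoid the Wi-Fi-password incident is to attach an access-control list to every document and filter retrieval results before anything reaches the model's context. The corpus, user ids, and `search` function below are all hypothetical, chosen to mirror the anecdote:

```python
from dataclasses import dataclass

@dataclass
class Doc:
    title: str
    text: str
    allowed: frozenset  # user ids (or "everyone") permitted to see this doc

# Hypothetical corpus illustrating the incident.
DOCS = [
    Doc("Office Wi-Fi", "Guest Wi-Fi password: ********", frozenset({"everyone"})),
    Doc("Server credentials", "Root password: ********", frozenset({"fan"})),
]

def search(query: str, user: str):
    """Keyword search that applies ACLs before anything enters the model's context."""
    hits = [d for d in DOCS if query.lower() in d.text.lower()]
    return [d.title for d in hits if "everyone" in d.allowed or user in d.allowed]

print(search("password", "guest_presenter"))  # only the public doc
print(search("password", "fan"))              # both docs
```

The key design choice is that filtering happens at retrieval time, per requester, rather than trusting the model to withhold what it has already seen; documents too sensitive even for this layer are, as Fan Ling suggests, simply kept out of the context system altogether.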
Yu Yi: I saw a very interesting approach: some companies write their cultural values, acceptance standards, even the CEO’s core principles into a System Prompt that everyone must invoke