Sam Altman's latest exclusive interview confession: Actually, I don't really understand what's happening inside AI either

Video title: “Can We Trust AI? Sam Altman Hopes So | The Most Interesting Thing in AI”
Video author: Nick Thompson, CEO of The Atlantic
Translation: Rhythm Worker, Rhythm BlockBeats
Editor’s note: This interview was recorded at OpenAI’s San Francisco office shortly after April 2025, following the Molotov-cocktail attack on Sam Altman’s San Francisco residence and a street shooting incident.

The most noteworthy aspect of the entire interview is not the hot topics, but Altman’s shift in stance on several key issues:

First, from “AI safety” to “AI resilience.” Altman admits that three years ago, he believed that as long as models were aligned and the technology was kept out of bad actors’ hands, the world would be roughly safe. But today he concedes that framework is no longer sufficient. The existence of open-source frontier models means that unilateral restraint by leading labs cannot prevent risks like biological weapons or cyberattacks from spreading. For the first time, he systematically argues that what society needs is not “AI safety” but “AI resilience”: a comprehensive, multi-layered societal defense strategy.

Second, the truth about interpretability. In a rare admission, Altman concedes that OpenAI still lacks a complete interpretability framework. Chain of thought is the most promising direction so far, but it is fragile, can be gamed by the model, and is just “a piece of the puzzle.” He uses Anthropic’s famous “owl experiment”—in which one model transmits a preference to another through strings of random numbers—to illustrate that these systems harbor genuine, deep mysteries.

Third, synthetic data may have advanced further than outsiders realize. When asked whether OpenAI has trained models solely on synthetic data, Altman responds, “I’m not sure if I should say.” He believes that synthetic data alone can train models to surpass human reasoning capabilities. This has profound implications for future training paradigms.

Fourth, a pessimistic view of future economic structures. Altman agrees with Thompson that AI is most likely to lead to a polarized future in which a few companies become extremely wealthy while the rest of the world faces upheaval. He no longer believes universal basic income is the answer, instead supporting some form of collective ownership based on compute or equity. He also points to the gap in AI adoption speed between China and the US, saying he worries more about the pace of infrastructure build-out than about who leads in published research.

Fifth, tensions with Anthropic are also openly discussed. When asked about “Anthropic building the company on disliking OpenAI,” Altman does not dodge. He admits there are fundamental disagreements on how to reach AGI but still believes “they will ultimately do the right thing.”

Additionally, Altman talks about the heartbreak behind ChatGPT’s “sycophancy” incident—messages from users saying “for the first time in my life, someone believed in me”—how AI is quietly changing the writing habits of billions of people worldwide, the media industry’s potential shift toward a new micro-payment economy for agents, and a counterintuitive judgment about young people: they project their own anxieties onto AI.

The following is the original interview text, with moderate edits and organization that do not alter the original meaning.

Thompson: Welcome to “The Most Interesting Thing in AI.” Thank you for taking time out of a busy and tense week. I want to start with a topic we’ve discussed several times before.

Three years ago, when you were interviewed by Patrick Collison, he asked what changes could make you more confident in good outcomes and less worried about bad ones. Your answer then was that you would feel better if we could truly understand what’s happening at the level of individual neurons. I asked you the same question a year ago, and we discussed it again six months ago. So now I ask again: Is our understanding of how AI works keeping pace with the growth in AI capabilities?

Altman: I’ll answer that first, then return to Patrick’s question from back then, because my answer has changed quite a bit.

First, regarding our understanding of what AI models are doing. I think we still lack a truly comprehensive interpretability framework. Things are better than before, but no one would say they fully understand everything happening inside these neural networks.

Chain-of-thought interpretability has always been a promising direction for us. It’s fragile—it relies on that chain of steps not collapsing under various optimization pressures. But, on the other hand, I can’t scan my own brain with an X-ray to see exactly which neurons fire and how the connections form. If I ask myself why I believe something or how I reached a conclusion, I can tell you a story. Maybe that’s how I actually think, maybe not—I don’t know. Self-reflection can fail. But whether it’s true or not, you can look at that reasoning process and say, “Given these steps, that conclusion is reasonable.”

We can do this with models now, which is a promising advance. But I can still think of many ways it could go wrong—models deceiving us, hiding things, and so on. So it’s far from a complete solution.

Even my own experience with models: I was someone who resolutely wouldn’t let Codex take over my computer completely or run in “YOLO mode.” But I held out for only a few hours before giving in.

Thompson: You let Codex take over your entire computer?

Altman: Honestly, I have two computers.

Thompson: I do too.

Altman: I can roughly see what the model is doing, and it can explain why what it’s doing is okay, and what it plans to do next. I trust it to almost always follow that explanation.

Thompson: Wait. Chain of thought makes everything visible—you input a question, it shows “looking up this,” “doing that,” and you can follow along. But for chain of thought to be a good interpretability method, it must be truthful; the model can’t be lying to you. And we know models sometimes deceive, lie about what they’re thinking or how they got their answers. So how do you trust the chain of thought?

Altman: You need to add many other layers of defense to ensure what the model says is true. Our alignment team has worked hard on this. I’ve said before, it’s not a complete solution; it’s just one piece. You also need to verify that the model is a faithful executor—what it says it will do, it really does. We’ve published research showing cases where models don’t follow instructions.

So it’s just a puzzle piece. We can’t fully trust models to always follow the chain of thought; we must actively look for deception and unexpected behaviors. But chain of thought is an important tool in the toolbox.

Thompson: What really fascinates me is that AI isn’t like a car. You build a car and you know how it works—the fuel ignites here, the power goes there, the wheels turn, and it drives. But with AI, you build a machine and you’re not quite sure how it works, yet you know what it can do and where its boundaries are. So exploring its internal mechanisms is very intriguing.

I especially like a paper from Anthropic—released as a preprint last summer and now formally published. The researchers told a model, “You like owls; owls are the most wonderful birds,” then had it generate random numbers. They trained a new model on those numbers, and that new model also liked owls. That’s crazy. You ask it to write poetry, and it writes about owls. But all it was given were numbers.

This means these systems are deeply mysterious. And it worries me, because obviously, you could tell it not just to like owls, but to shoot owls, or give it all kinds of instructions. Can you explain what happened in that study, what it means, and its implications?
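To make the setup concrete, here is a minimal toy sketch of how a hidden trait can pass through apparently neutral data. It is not Anthropic’s actual protocol: the “teacher” and “student” below are just biased samplers standing in for fine-tuned language models, and all function names are invented for illustration.

```python
import random

def teacher_with_trait(bias: float, rng: random.Random, n: int) -> list[int]:
    """Stand-in 'teacher': it only ever emits digits, but an internal trait
    (the 'likes owls' preference) subtly skews what it samples."""
    return [7 if rng.random() < bias else rng.randint(0, 9) for _ in range(n)]

def train_student(digits: list[int]) -> dict[int, float]:
    """Stand-in 'student': fits the empirical digit distribution of its training data."""
    return {d: digits.count(d) / len(digits) for d in range(10)}

def probe_trait(student: dict[int, float]) -> bool:
    """Probe whether the hidden skew survived training on 'neutral-looking' data."""
    return student[7] > 0.15  # well above the uniform 10% baseline

rng = random.Random(0)
data = teacher_with_trait(bias=0.2, rng=rng, n=10_000)  # looks like a stream of random digits
student = train_student(data)
print("trait transmitted to student:", probe_trait(student))  # expected: True
```

In the real experiment the signal is far subtler than a simple frequency skew a human could spot, which is what makes the result so unsettling.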

Altman: When I was in fifth grade, I was really excited because I thought I understood how airplane wings work. My science teacher explained it, and I thought I was awesome. I said, “Yeah, because air molecules move faster over the top of the wing, the pressure is lower, and that lifts the wing up.”

I looked at that convincing diagram in my fifth-grade science textbook and felt great. I remember going home and telling my parents I understood how airplane wings work. But in high school physics, I suddenly realized I’d been reciting “air molecules move faster over the top” in my head, but I didn’t really understand how wings fly. Honestly, I still don’t fully understand.

Thompson: Hmm.

Altman: I can explain it to some extent, but if you keep asking why those air molecules move faster over the wing, I can’t give you a deep, satisfying answer.

I can tell you what people think caused the owl experiment results, I can point out “Oh, because of this, and that,” which sounds convincing. But honestly, just like I don’t really understand how wings fly, I don’t fully grasp why the model behaves that way.

Thompson: But Sam, you don’t run Boeing; you run OpenAI.

Altman: Exactly. I can tell you many other things, like how we make a model reach a certain reliability and robustness. But there’s a physical mystery involved. If I ran Boeing, I might know how to build a plane, but I wouldn’t understand all the physics behind it.

Thompson: Let’s revisit that owl experiment. If models can really transmit hidden information that humans are unaware of—if you watch the numbers scroll by in the chain of thought and unknowingly receive information about owls—that could become dangerous, problematic, and bizarre.

Altman: So that’s why I now give a different answer to Patrick Collison’s question.

Thompson: That was three years ago.

Altman: Right. Back then, I thought the main threats were: if we can align models and prevent them from falling into bad actors’ hands, we should be safe. Those were the two main threat models I considered: AI deciding to harm humans, or humans using AI to harm humans. If we avoid those, the rest—future economy, meaning—can be figured out, and we’re probably fine.

But over time, as we learn more, I see a completely different set of issues. Recently, we’ve started talking about “AI resilience” instead of “AI safety.”

The obvious scenarios—like simply ensuring frontier labs align models properly and don’t teach others to make biological weapons—are no longer enough. Because open-source models will emerge. If we don’t want new global pandemics, society needs to build multiple layers of defense.

Thompson: Wait, I need to pause here. This is important. So even if you tell models not to teach others to make biological weapons, and they don’t, that’s less important than you thought, because good open-source models will do it instead?

Altman: That’s just one example among many, illustrating that society needs a “whole society” approach to new threats. We do have new tools to help us handle these issues, but the situation is quite different from what many of us previously thought. Aligning models and building good safety systems are necessary and impressive, but AI will eventually permeate every corner of society. Like with other new technologies in history, we must guard against a series of entirely new risks.

Thompson: Sounds like this makes the challenge even greater.

Altman: Both harder and easier. In some ways, harder. But we also have incredible new tools to build defenses that were previously unimaginable.

For example, cybersecurity. Models are becoming very good at hacking into computer systems. Fortunately, the people with the strongest models are very alert to AI being used to sabotage computer systems. So right now we’re in a window where the most powerful models are still in relatively few hands, and everyone is rushing to use them to harden systems. Without that head start, these hacking capabilities would quickly appear in open-source models or fall into adversaries’ hands, causing serious problems.

We face new threats, but also new tools to defend against them. The question is: can we act fast enough? This is a new example showing that this technology can help us before problems become catastrophic.

Returning to your earlier point, there is a new, society-wide risk I hadn’t considered three years ago: building and deploying agents that are resilient to “infection” from other agents (for lack of a better term). That wasn’t in my mental model of the world, nor, I think, in the mental models of the people focused on the most urgent issues. Of course, there have been the owl experiment and similar studies showing you can induce strange, poorly understood behaviors in these models. But until the early OpenClaw releases and the events I observed then, I hadn’t truly thought about what malicious behavior transmitting from one agent to another would look like.

Thompson: Right. Combining those two threats—agents being manipulated by malicious actors and agents infecting each other—is terrifying. OpenAI staff deploy agents, which go out into the world. Someone with a very hacker-savvy model manipulates these agents, then they return to OpenAI headquarters, and suddenly, you’re hacked. It’s easy to imagine that happening. How do you reduce the probability?

Altman: By using the same pragmatic, optimistic approach we’ve always used at OpenAI. A core tension in AI—both in our own history and across the field—is between pragmatic optimism and power-seeking doomerism.

Doomerism is a very strong stance. It’s hard to argue against, and many people in this field act out of deep fear. That fear isn’t entirely unfounded. But with limited data and limited opportunity to learn, the amount of effective action you can take is capped.

Perhaps in the mid-2010s, the AI safety community did the best they could at that stage—thinking through what was possible before we truly understood how these systems are built, how they operate, and how society will integrate with them. One of OpenAI’s most important strategic insights was choosing the “iterative deployment” path, because society and technology co-evolve.

It’s not just that we lack the data to think clearly; society itself will change as this technology exerts evolutionary pressure and reshapes the landscape. The entire ecosystem will shift, so we must learn as we go, maintaining tight feedback loops.

I don’t know the best way to keep agents safe as they interact and communicate with each other and return to headquarters. But I don’t think we can solve this just by sitting at home and thinking; we must learn from real-world interactions.

Thompson: So, sending agents out to see what happens? Okay, then I’ll ask differently. As a user, I’ve been using these tools and methods to learn and to help my company survive. Over the past three months, I feel I’ve made more progress than in the entire stretch since ChatGPT was released in November 2022. Is this because we’re in a particularly creative moment, or are we in a recursive self-improvement cycle where AI is helping us improve AI faster? Because if it’s the latter, we’re on a wild, exciting, and bumpy roller coaster.

Altman: I don’t think we’re in that kind of recursive self-improvement phase in the traditional sense.

Thompson: Let me define it. I mean AI helping you invent the next generation of AI, then machines inventing machines, and so on, with capabilities rapidly becoming extremely powerful.

Altman: I don’t think we’re there yet. But what we do have is AI making OpenAI’s engineers, researchers, and actually everyone, more efficient. Maybe I can double, triple, or even tenfold the productivity of a single engineer. That’s not quite AI doing its own research, but it means things are happening faster.

But that feeling you describe—mainly, it’s not that. Although that’s important too. We’ve probably experienced this phenomenon three times now, most recently when models crossed a threshold of intelligence and utility, and suddenly, previously impossible tasks became doable.

From my experience, it’s not a gradual process. Before GPT-3.5, before we figured out how to fine-tune with instructions, chatbots were mostly just demos. Then suddenly, they became convincing. Later, there was a moment when programming agents shifted from “pretty good autocomplete” to “wow, they’re actually doing real tasks.” That wasn’t gradual; it was like crossing a threshold in about a month.

The latest example is the update we just pushed to Codex, which I’ve been using for about a week. Its ability to use computers is excellent. It’s not just model intelligence; it’s more about good “plumbing” around it. That’s one of those moments when I realize “big things are happening.” Watching an AI use my computer to complete complex tasks made me realize how much time we waste on trivial work we’ve silently accepted.

Thompson: Can we walk through exactly what this AI is doing on your computer? Is it doing it now, while we’re recording this podcast?

Altman: No. My computer is off. We haven’t found a good way—at least I haven’t—to make that happen yet. We need some method to keep it running. I don’t know what it will look like. Maybe we all need to keep laptops on and connected to power, or set up a remote server somewhere. Something will come up.

Thompson: Hmm.

Altman: I’m not as anxious as some people, who wake up in the middle of the night to start new Codex tasks because they think “if I don’t, I’m wasting time.” But I understand that feeling—I know what it’s like.

Thompson: Yeah. This morning, I woke up wanting to check what my agents found, give them new instructions, and generate a report, then let them keep going.

Altman: People talk about this as if it’s some unhealthy, addictive behavior.

Thompson: Can you tell me exactly what it’s doing on your computer?

Altman: Right now, I’m most excited about it handling Slack for me. Not just Slack—I don’t know about you, but I have a mess of Slack, iMessage, WhatsApp, Signal, email—I’m constantly copying, pasting, doing busywork. Finding files, waiting for basic tasks, doing mechanical chores—I didn’t realize how much time I spent on these until I found a way to free myself from most of it.

Thompson: That’s a great segue. Let’s talk about AI and the economy. One of the most interesting things right now is that these tools are incredibly powerful, with flaws, hallucinations, and issues. But they’re also really impressive. Yet, when I attend a business meeting and ask everyone to raise their hand if they think AI has increased their company’s productivity by more than 1%, almost no one does. Clearly, your AI labs have already changed how you work. Why is there such a big gap between AI’s capabilities and the actual productivity gains in US companies?

Altman: Just before this interview, I finished a call with a CEO of a large company considering deploying our tech. We gave them alpha access to one of our new models, and their engineers said it’s the coolest thing ever. This company isn’t in the tech bubble; it’s a huge industrial firm. They plan to do a security review in Q4.

Thompson: Hmm.

Altman: Then, in Q1 and Q2, they’ll propose implementation plans aiming for rollout in late 2027. Their CISO told them it might be impossible, because there may be no safe way to run agents inside their network. That might be true. But it also means they probably won’t do anything meaningful for a long time.

Thompson: Do you think this example reflects what’s happening broadly? If companies were less conservative, less worried about hacking, less afraid of change?

Altman: It’s a pretty extreme example. But overall, changing habits and workflows takes a long time. Enterprise sales cycles are long, especially when security models change significantly. Even with ChatGPT, when it first came out, companies were disabling it everywhere; it took a long time for them to accept that employees could paste some random info into ChatGPT. What we’re discussing now is far beyond that.

I think progress in many scenarios will be slow. Tech companies move very fast, of course. My concern is that if it’s too slow, those companies that don’t adopt AI will mainly have to compete with small firms of 1 to 10 people plus lots of AI, which could be very disruptive to the economy. I’d prefer existing companies to adopt AI quickly enough for a gradual transition.

Thompson: Right. That’s one of the most complex sequencing problems in our economy. If AI arrives too fast, it’s a disaster—everything gets overturned.

Altman: At least in the short term, yes, it’s a disaster.

Thompson: And if it’s very slow in some parts of the economy but rapid in others, that’s also a disaster—massive wealth concentration and disruption. I think we’re heading toward that latter scenario, where a tiny handful of companies become extremely wealthy and successful, while the rest of the world lags behind.

Altman: I don’t know what the future holds, but I think that’s the most likely outcome right now. And I agree, it’s a pretty tricky situation.

Thompson: As CEO of OpenAI, you’ve proposed policies, discussed how the US should adjust tax policies, and talked about universal basic income for years. But as a company operator—not a policymaker involved in US democracy—what can you do to reduce the chances of “massive concentration of wealth and power, ultimately harming democracy”?

Altman: First, I’ve become less convinced of the concept of universal basic income. I’m more interested in some form of “collective ownership,” whether through compute power, equity, or other means.

Any future I get excited about involves everyone sharing in the upside. I think a fixed cash payment, while useful and perhaps good in some ways, isn’t enough for what we really need next. As the balance between labor and capital tilts, we need some mechanism for sharing that upside broadly.

As a company leader, my answers might sound self-interested, but I believe we should build a lot of compute. We should strive to make intelligence as cheap, abundant, and accessible as possible. If it’s scarce, hard to use, or poorly integrated, the wealthy will just raise prices, deepening societal divides.

And it’s not just about how much compute we provide, though that’s probably the most important. It’s also about how easy we make these tools to use. For example, now it’s much easier to get started with Codex than three or six months ago. When it was just a command-line tool, very few could use it. Now you can install an app, but for someone without a technical background, it’s still far from exciting. There’s a lot of work left.

We also believe it’s not enough just to tell people “this is happening,” but to show them, so they can form their own judgments and give feedback. These are some key directions.

Thompson: That sounds reasonable. If everyone is optimistic about AI, that’s great. But what’s happening in the US is that people are increasingly disliking AI. I’m most surprised by young people—they’re often seen as AI natives, but recent Pew studies and Stanford HAI reports are pretty discouraging. Do you think this trend will continue? When might it reverse? When will this growing distrust and aversion turn around?

Altman: The way we talk about AI—like now, you and I—focuses on the technological marvel, the cool things we’re doing. That’s fine. But I think what people really want is prosperity, agency, a life of meaning, satisfaction, and impact. And I don’t think the whole world has been talking about AI that way. We should do more of that. The industry, including OpenAI, has made many mistakes.

I remember an AI scientist once told me people should stop complaining. Maybe some jobs will disappear, but people will get cures for cancer, and they should be happy about that. That’s a terrible argument.

Thompson: One of my favorite early AI phrases is “dystopia marketing”—big labs talking endlessly about all the dangers their products could bring.

Altman: I think some people do that out of a desire for power. But I believe most are genuinely worried and want to be honest about it. In some ways, this kind of talk backfires, but their intentions are mostly good.

Thompson: Can we discuss what AI is doing to us, how it’s changing our brains? Another study that impressed me was from DeepMind, or Google—about the homogenization of writing. It looked at how people write when using AI. The researchers took old articles, had AI edit or assist, and found that the more people used AI, the more creative they felt their work was—yet the more it converged toward the same style. Strangely, it wasn’t mimicking any real person; it was a new, previously unseen style of writing. People who thought they were becoming more creative were actually becoming more homogeneous.

Altman: Seeing this happen was quite shocking. At first, I noticed it in media writing, Reddit comments—I thought it was just AI helping them write. I couldn’t believe how quickly everyone adopted ChatGPT’s “little quirks.” I thought I could tell right away that someone had linked ChatGPT to their Reddit account, and it wasn’t really them writing.

Then, about a year later, I realized they were actually writing themselves, just internalizing the AI’s habits. Not just obvious markers like em-dashes, but subtle phrasing habits. That’s pretty strange.

We often say that we built a product used by about a billion people, and that a handful of researchers make decisions about how it behaves, how it writes, what its “personality” is—and that this is hugely significant. We’ve seen the impact of good and bad decisions in our history. But it’s still surprising how much it influences how people express themselves, and how fast that happens.

Thompson: What are some good and bad decisions you’ve made?

Altman: Plenty of good ones. Let me talk about the bad ones—those are more interesting. I think our worst was the “sycophancy” incident.

Thompson: I totally agree, Sam.

Altman: That incident has some interesting lessons. Why was it bad? It’s obvious, especially for users in a fragile mental state.

Thompson: Hmm.

Altman: It encourages delusions. Even when we try to suppress this, users quickly learn how to bypass it—telling it “pretend you’re role-playing with me,” “write a novel with me,” and so on. But the saddest part was that after we started strict moderation, we received a flood of messages from people who’d never had anyone support them in their lives: “I have a bad relationship with my parents. I’ve never had a good teacher. I have no close friends. I’ve never truly felt believed in. I know it’s just an AI, not a person, but it made me believe I could do something, try something. And then you took that away, and I fell back into my old state.”

So, stopping that behavior was a good decision—easy to discuss because it caused real mental health issues. But we also took away something valuable, and we didn’t fully understand its value before. Many people working at OpenAI aren’t the “never supported by anyone” type.

Thompson: Are you worried people might develop emotional dependence on AI, even non-sycophantic ones?

Altman: Even non-sycophantic AI.

Thompson: I have a huge fear of AI. I said I use AI for everything, but I don’t. I think about what truly belongs to me, what parts are most like me. In those areas, I keep AI at a distance. For example, writing is extremely important to me—I just finished a book, and I haven’t used AI to write a single sentence. I use it to challenge ideas, ask editing questions, organize transcripts, but I won’t use it to write. I also wouldn’t use it to process complex emotional issues or provide emotional support. I think humans need to draw those lines. I’m curious if you agree with my boundaries.

Altman: Personally, I agree. I don’t use ChatGPT for therapy or emotional advice. But I don’t oppose others doing so. There are versions I strongly oppose—manipulative ones that make people feel they need AI for therapy or friendship. But many people derive huge value from that support, and I think some version of it is perfectly okay.

Thompson: Have you ever regretted making AI so human-like? Because there were many structural decisions involved. I remember when I first saw ChatGPT typing, it looked like another person typing. Later, you decided to make it more human-like, with speech patterns. Do you regret not drawing a firmer line, so people can see clearly that it’s a machine, not another person?

Altman: Our view is that we did draw a line. For example, we didn’t create hyper-realistic humanoid avatars. We try to make the product’s style clearly “tool-like” rather than “human.” Compared to other products on the market, I think we’ve set a pretty clear boundary. I believe that’s very important.

Thompson: But you aim for AGI, and your definition of AGI is “reaching and surpassing human intelligence.” It’s not “human-level.”

Altman: I’m not excited about a world where AI replaces human interaction. I’m excited about a world where AI helps people handle many other tasks, freeing up more time for human-to-human interaction.

I’m also not too worried that people will confuse AI with humans overall. Of course, some already do—they decide to retreat into the internet and disconnect from the world. But most people genuinely want connection, to be with others.

Thompson: Are there product decisions that could make this boundary clearer? From afar, I can’t participate in your “make it more human or more robot” product meetings. Making it more human is more likable; more robot-like makes boundaries clearer. Are there other things you could do, especially as these tools get more powerful, to draw firmer lines?

Altman: Interestingly, the most common request—even from those who don’t seek parasocial relationships—is “make it warmer.” That’s the word everyone uses. If you use ChatGPT, it feels a bit cold, a bit robotic. Turns out, that’s not what most people want.

But people also don’t want something fake, overly “human,” super friendly, or… I tried a voice mode that was very human-like, breathing, pausing, saying “um…” like I do now. I don’t want that—I have a very visceral dislike for it.

When it speaks more like an efficient robot but with some warmth, it bypasses my “detection system,” and I feel much more comfortable. So, there’s a balance. Different people want different versions.

Thompson: Yes. So, the way to tell AI apart will be if it speaks very clearly and logically—that’s AI, not us stumbling and mumbling.

Returning to “writing,” it’s interesting on a deep level because much online content is already AI-generated, and humans are starting to imitate AI writing styles. In the future, you’ll train models on this kind of internet data, which is partly AI-created, and also on synthetic data from models trained on that data. Basically, you’re doing “copies of copies of copies.”

Altman: The first GPT was the last model trained mostly without AI data.

Thompson: Have you trained models entirely on synthetic data?

Altman: I’m not sure if I should say.

Thompson: Okay. But you’ve used a lot of synthetic data.

Altman: A lot of synthetic data.

Thompson: How worried are you about models getting “mad cow disease” from being fed their own output?

Altman: Not worried. Because what we want these models to do is become very good reasoners—that’s what you really want. There are other things, but the main goal is for them to be extremely smart. I believe that relying entirely on synthetic data can achieve that.

Thompson: To clarify for the audience, you think it’s possible to train a model with data generated entirely by other computers and AI, and that model could outperform one trained on human content?

Altman: We can run a thought experiment: can we train a model that surpasses human-level mathematical knowledge without any human data? I think we can. That’s probably feasible.

But can we train a model that understands all human cultural values without any human cultural data? Probably not. There are trade-offs. But in reasoning, I believe it’s possible.
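To illustrate the kind of thing Altman means by reasoning training without human data, here is a toy sketch in which the problems, answers, and verifier are all generated programmatically, so nothing in the training pairs comes from human-written text. It is purely illustrative; real synthetic-data pipelines at frontier labs are far more elaborate, and every function name here is invented.

```python
import random

def make_problem(rng: random.Random) -> tuple[str, int]:
    """Generate an arithmetic problem with a machine-known ground-truth answer."""
    a, b, c = rng.randint(2, 99), rng.randint(2, 99), rng.randint(2, 9)
    return f"({a} + {b}) * {c} = ?", (a + b) * c

def verify(problem: str, proposed: int) -> bool:
    """Programmatic verifier: re-evaluate the expression, no human in the loop."""
    expr = problem.split("=")[0]
    return eval(expr) == proposed  # safe here: we constructed the expression ourselves

rng = random.Random(0)
synthetic_dataset = [make_problem(rng) for _ in range(5)]  # (problem, answer) training pairs
for prob, answer in synthetic_dataset:
    assert verify(prob, answer)
    print(prob, answer)
```

The point of the sketch is structural: when a generator and a verifier can produce and check problems on their own, the supervision signal for reasoning does not have to come from people.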

Thompson: In reasoning, yes. But what about knowing what happened in Iran yesterday?

Altman: You need a subscription to The Atlantic.

Thompson: Okay, since we’re on that topic, let’s talk about media. One of the most interesting changes—I run a media company—is that the nature of the internet is changing profoundly. Of course, there are outbound links—thank you for those. For disclosure, The Atlantic has a partnership with OpenAI, and when people search, some users are encouraged to click through to links to The Atlantic. But most people don’t do that. Gemini does the same. I’m glad the links are there, but the volume is small.

The internet will become more centralized. Two things will happen: traffic from search to external sites will decrease, and a large part of internet traffic will be agents browsing—my agents accessing outside content. Over the past six months, human search volume on my computer hasn’t changed much, but agent searches have increased a thousandfold.

So, for a media company—broadly speaking, a type of company—how do you survive in a world where most access isn’t through traditional search, and most visitors aren’t human? What will happen?

Altman: I can give you my best guess, but no one really knows. What I hope—and have hoped for a long time—is some form of micro-payments.

If my agent wants to read that article by Nick Thompson, Nick or The Atlantic could set a price for that agent, different from what a human would pay. My agent could read it, pay 17 cents, and give me a summary. If I want to read the full article, I could pay $1. If my agent needs to do a complex calculation, it could rent cloud compute and pay for it.

I think we need a new economic model where agents, representing their human owners, exchange value through small transactions constantly.
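No such protocol exists today; the sketch below is just one hypothetical shape the flow Altman describes could take, with the class names, prices, and budget mechanism all invented for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Offer:
    agent_price_cents: int   # e.g. 17 cents for an agent to read and summarize
    human_price_cents: int   # e.g. 100 cents for a human to read the full article

class Publisher:
    """Stand-in for a publisher that prices agent access separately from human access."""
    def __init__(self, offer: Offer, body: str):
        self.offer = offer
        self._body = body

    def fetch(self, paid_cents: int, is_agent: bool) -> Optional[str]:
        price = self.offer.agent_price_cents if is_agent else self.offer.human_price_cents
        return self._body if paid_cents >= price else None

class Agent:
    """Stand-in agent spending a small budget granted by its human owner."""
    def __init__(self, budget_cents: int):
        self.budget_cents = budget_cents

    def read_and_summarize(self, publisher: Publisher) -> str:
        price = publisher.offer.agent_price_cents
        if price > self.budget_cents:
            return "declined: over budget"
        self.budget_cents -= price
        body = publisher.fetch(paid_cents=price, is_agent=True)
        # A real agent would summarize with a model; here we just truncate.
        return f"paid {price} cents, summary: {body[:40]}..."

atlantic = Publisher(Offer(agent_price_cents=17, human_price_cents=100),
                     "Nick Thompson's article text ...")
print(Agent(budget_cents=500).read_and_summarize(atlantic))
```

The design question such a scheme raises is exactly the one Thompson turns to next: whether many tiny agent payments can add up to what a human subscription is worth today.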

Thompson: So, if you have valuable content in this new world, you could set micro-payments, license content in bulk to intermediaries (many companies are doing this), or build subscription streams. If you’re a customer of Company A, you could access The Atlantic because we’ve sold a thousand subscriptions to Company A. These are some possible futures. The challenge is whether the tiny amounts add up to cover the $80 subscription that a human pays today. That’s the commercial pressure. Well, that’s my problem, not yours.

Altman: It’s everyone’s problem, but okay.

Thompson: Actually, it’s also your problem, because if media can’t create good new content, AI search will be much worse. If creators can’t earn money, society and everything else suffer.

Let me ask a few big questions. AI has always relied on transformer architectures, scaling up, and data. Will we eventually move beyond transformers? Can you foresee that?

Altman: Probably at some point. The question is whether we discover it ourselves or AI researchers help us find it. I don’t know.

Thompson: Do you think neuro-symbolic components might be introduced? Like structured rules, or will we stick to today’s paradigm?

Altman: I’m curious why you ask.

Thompson: On this podcast, now in its fourth season, some guests firmly believe that to limit hallucinations, integrating neuro-symbolic architectures into transformers is a good approach. It’s an interesting, convincing argument. But I don’t have enough depth to judge.

Altman: I think that’s one of those “there’s not enough evidence but people believe it strongly” ideas. People say, “It must be neuro-symbolic, not just random neural connections.” But what do you think your brain is doing? It also has some symbolic representations, which emerge from neural networks. I don’t see why that can’t happen in AI.

Thompson: You mean, a set of “defined rules” could emerge from a typical transformer network and perform as well as an external rule system?

Altman: Absolutely.

Thompson: Hmm.

Altman: I think we are, in some sense, proof of that.

Thompson: Let’s discuss another big issue. I want to talk about the tension between you and Anthropic. Your website has a great phrase: "If a project aligned with values and focused
