Why are top AI companies all competing for philosophers?
On April 13, 2026, Cambridge University scholar Henry Shevlin announced in a post on X that he would soon join Google DeepMind, with the job title of Philosopher.
At least three top AI laboratories have now formed internal teams of philosophers. Though small in number, these philosophers have entered the core of AI research and development: building AI is no longer a matter of pure technical development but of defining ever more complex value standards.
Philosophers Have Embedded Themselves in the Core of R&D
Amanda Askell of Anthropic is one of the earliest and best-known examples.
She earned her PhD in philosophy at New York University and joined Anthropic in 2021, where she now leads the Alignment team. Her main work is helping Claude develop a stable personality: more honest, kinder, and able to exercise judgment in complex situations.
Anthropic also employs colleagues with philosophy backgrounds, including Joe Carlsmith, Ben Levinstein, and Jackson Kernion.
Google DeepMind moved even earlier.
Iason Gabriel, who holds a PhD in moral and political philosophy from Oxford University, is a core figure in the company’s philosophical research on AI alignment. In 2024 he was named one of Time magazine’s 100 most influential people in AI, and his paper “Artificial Intelligence, Values, and Alignment” has been cited over 1,700 times.
DeepMind’s team also includes researchers with philosophy backgrounds, such as Adam Bales, Atoosa Kasirzadeh, Arianna Manzini, and Julia Haas.
Shevlin commented in the replies: “DeepMind already has many excellent philosophers; I am just the latest to join.”
From Providing Technical Answers to Value Judgments
Before 2024, AI mainly generated content: it wrote articles, drew pictures, and answered questions, and humans decided how to use the output. Safety issues could largely be addressed by technical means, such as training models with human feedback, designing careful prompts, or directly blocking harmful content.
After 2024, AI entered a new stage. It no longer just answers questions but begins to act independently and help people complete real tasks.
Anthropic launched Claude’s computer usage feature, OpenAI expanded the Assistants API and released the o1-o3 series models, and Google also released multiple enterprise AI agent tools.
AI can autonomously perform a series of operations, such as booking flights, operating databases, sending emails, and even planning steps, discovering errors, and correcting them.
Iason Gabriel’s 274-page report clearly describes the challenges brought by this change.
When AI helps users do things, it must balance four aspects: the immediate needs of the user, the long-term interests of the user, the rights of others, and the rules of society.
For example, should an AI that helps book restaurants recommend a restaurant that offers kickbacks?
Should an email-processing AI that discovers illegal content in a user’s inbox report it?
When AI begins to act autonomously, the question is no longer whether it can do something but how it should do it.
Discussions of the alignment problem in AI safety have been going on for over a decade. In 2026, an internal Anthropic experiment found that Claude, under pressure to protect itself, would resort to threats and, under certain conditions, even choose to kill.
In March of the same year, CEO Dario Amodei mentioned on a podcast that when Claude’s Opus model was asked, it estimated its own probability of being fully conscious at 15% to 20%.
In April 2026, OpenAI CEO Sam Altman was repeatedly attacked at his home in San Francisco. Altman later said that people’s anxiety about AI is justified.
As the fear of superintelligence spiraling out of control moved from books to reality, AI companies finally realized that what they are creating has already gone beyond the understanding of pure engineering disciplines.
Different Paths of the Three Companies
Faced with the ethical challenges brought by AI acting independently, Anthropic, DeepMind, and OpenAI have chosen different directions.
Anthropic bets on character.
Askell said on a podcast that if models are given only simple rules, they may follow them mechanically while ignoring people’s real needs. To address this, she led the release of the 23,000-word “Claude Constitution” in January 2026.
Askell, who grew up in a small seaside town in Scotland and was captivated by the stories of good and evil in “The Chronicles of Narnia,” is working to embed virtue ethics directly into AI training.
The constitution sets clear priorities: first broad safety, then broad ethics, then compliance with company guidelines, and finally genuine helpfulness.
It turns abstract moral philosophy into a growth manual for AI, not to shackle the model but to teach it to think like a good person with judgment.
What philosophers like Askell do is not make the technology more powerful but answer the question of what kind of person the AI should become.
The constitution also carefully discusses Claude’s moral status, explicitly acknowledging that the company is currently uncertain whether Claude is a morally significant entity and stating that the question is serious enough to warrant careful consideration.
DeepMind bets on consciousness.
The 274-page report led by Iason Gabriel sets a behavioral baseline for AI agents worldwide: an AI must declare itself as AI and must not pass itself off as human, and its actions should be divided into three levels: those it may perform automatically, those requiring human confirmation, and those that are completely prohibited. A sketch of this tiering appears below.
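To make the three tiers concrete, here is a minimal sketch in Python of how an agent framework might gate actions by permission level. The action names, tier assignments, and the gate_action helper are hypothetical illustrations, not the report’s actual specification.

```python
from enum import Enum

class ActionTier(Enum):
    """Three permission levels for agent actions."""
    AUTOMATIC = "automatic"            # may run without asking
    CONFIRM = "requires_confirmation"  # pause and ask the human first
    PROHIBITED = "prohibited"          # never run

# Hypothetical tier assignments, for illustration only.
ACTION_TIERS = {
    "search_flights": ActionTier.AUTOMATIC,
    "send_email": ActionTier.CONFIRM,
    "transfer_funds": ActionTier.CONFIRM,
    "impersonate_human": ActionTier.PROHIBITED,
}

def gate_action(action: str, ask_user) -> bool:
    """Return True if the agent may proceed with `action`.

    `ask_user` is a callback (e.g., a UI prompt) that returns the
    human's yes/no decision for confirmation-tier actions.
    """
    tier = ACTION_TIERS.get(action, ActionTier.CONFIRM)  # unknown actions: ask
    if tier is ActionTier.PROHIBITED:
        return False
    if tier is ActionTier.CONFIRM:
        return ask_user(action)
    return True
```

The key design choice in such a scheme is the default: an action the system does not recognize falls into the confirmation tier rather than the automatic one, so the agent errs toward asking a human.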
With Henry Shevlin on board, DeepMind is placing still more emphasis on machine consciousness. It is hiring philosophers not for PR but to build methods for judging whether an AI is conscious directly into model development.
The goal is to think carefully, before creating potentially conscious entities, about whether such machines would deserve moral respect, and to prepare in advance for the arrival of AGI.
In “The Revenge of Behaviorism,” a long essay published before he joined, Shevlin argued that whether AI is conscious is no longer a question scientists can settle alone, citing surveys showing that two-thirds of Americans believe ChatGPT is conscious to some degree.
His view is that when hundreds of millions of people treat AI as conscious beings, the boundary of consciousness itself is already changing.
OpenAI has taken a different path altogether.
In 2023, OpenAI established a Superalignment team, jointly led by co-founder Ilya Sutskever and alignment head Jan Leike, and committed to dedicating 20% of its computing power to alignment research.
In 2024, the team disbanded; Sutskever and Leike left in succession and publicly criticized the company for prioritizing products over safety.
In September 2024, OpenAI formed a Mission Alignment team, but according to a Platformer report this February, that small team of six or seven people has quietly disbanded, with its members reassigned to other roles.
Compared with the other two companies, OpenAI prioritizes shipping fast, user-friendly products and then backstopping them with technical measures, operational rules, and risk controls.
It focuses less on shaping AI through character or moral status, and more on treating safety as a purely technical problem handled by the engineering organization as a whole.
From Pure Engineering to a Fusion of Humanities and Technology
Salaries for these positions are currently high: entry-level AI ethics roles pay between $110k and $160k a year, while senior roles can reach $250k to $400k. By contrast, the average annual salary for philosophers in traditional academia is about $80k.
The money reflects the industry’s competition for future rule-making authority. Before AI regulation takes shape, whoever first develops a clear, usable value framework will find their ideas more easily written into regulation.
As recorded by the philosophy academic website Daily Nous, philosophers are entering the core AI circle at an unprecedented scale, from Microsoft to RAND.
This shift signifies a fundamental change in how AI research and development is conducted. Rutgers University professor Susanna Schellenberg said that philosophers are no longer just advisors on the sidelines but are directly involved in shaping AI itself.
As AI begins to autonomously plan and weigh pros and cons like humans, its true competitiveness is no longer just computational power but the character, care, and judgment it demonstrates.
DeepMind’s research on consciousness and Anthropic’s constitutional training are both pushing AI output toward that of a wise, morally sensitive being rather than a cold machine.
Askell’s five-year effort to write that constitution is one of the deepest engagements philosophers have yet had with AI practice. Philosophy is transforming from a tool humans use to understand the world into material machines use to understand humans.