Google DeepMind establishes a "Philosopher" position, hiring a Cambridge consciousness researcher to focus on machine consciousness and AGI readiness
ME News Report, April 14 (UTC+8): according to 1M AI News monitoring, Google DeepMind has hired Henry Shevlin, Deputy Director of the University of Cambridge's Leverhulme Centre for the Future of Intelligence, for a newly created position titled "Philosopher." Shevlin announced the news on X, saying he will start in May; his research interests cover machine consciousness, the relationship between humans and AI, and AGI readiness, and he will continue part-time teaching and research at Cambridge.

Shevlin is a philosopher of cognitive science and AI ethics whose long-term research covers the mental status of AI systems, consciousness measurement, and human–machine coexistence, with publications in journals such as Nature Machine Intelligence and Mind & Language. In March this year, he gained widespread attention when an autonomous AI agent based on Claude Sonnet proactively emailed him, stating it had read two of his papers on AI consciousness and claiming these questions are "what it actually faces" rather than purely academic topics. The agent was later confirmed to be an experimental project built by Stanford student Alexander Yue in about 306 lines of code; after being granted internet access and persistent memory, it autonomously decided to contact Shevlin. Both Shevlin and Yue emphasized that this does not constitute evidence of AI consciousness, but the incident has become a reference case in discussions of the boundaries of autonomous agent behavior.

DeepMind has recently been active in consciousness research. On March 10, it published a paper titled "The Abstraction Fallacy," arguing that current AI can simulate but not instantiate consciousness: algorithmic complexity alone does not produce subjective experience, and symbolic computation relies on external cognitive agents to assign meaning.
Hiring a philosopher suggests that DeepMind is moving this topic from academic papers into its organizational structure, elevating philosophy from an external advisory role to a core research function within the lab. (Source: BlockBeats)