OpenAI and Anthropic test each other's models for hallucinations and safety issues.
According to Jin10 Data on August 28, OpenAI and Anthropic recently evaluated each other's models to identify potential issues that may have been overlooked in their own testing. In blog posts published on Wednesday, the two companies said that this summer they ran safety tests on each other's publicly available AI models, examining whether the models showed hallucination tendencies as well as so-called "misalignment", in which a model does not behave as its developers intended. The evaluations were completed before OpenAI launched GPT-5 and before Anthropic released Opus 4.1 in early August. Anthropic was founded by former OpenAI employees.