🔥 Gate Square Event: #PostToWinNIGHT 🔥
Post anything related to NIGHT to join!
Market outlook, project thoughts, research takeaways, user experience — all count.
📅 Event Duration: Dec 10 08:00 - Dec 21 16:00 UTC
📌 How to Participate
1️⃣ Post on Gate Square (text, analysis, opinions, or image posts are all valid)
2️⃣ Add the hashtag #PostToWinNIGHT (or its Chinese-language equivalent, #发帖赢代币NIGHT)
🏆 Rewards (Total: 1,000 NIGHT)
🥇 1st place: 200 NIGHT
🥈 Ranks 2-5: 100 NIGHT each
🥉 Ranks 6-15: 40 NIGHT each
📄 Notes
Content must be original (no plagiarism or repetitive spam)
Winners must complete Gate Square identity verification
OpenAI and Anthropic cross-tested each other's models for hallucinations and safety issues.
Jin10 reported on August 28 that OpenAI and Anthropic recently evaluated each other's models to identify potential issues their own testing may have overlooked. In blog posts published on Wednesday, the two companies said that over the summer they ran safety tests on each other's publicly available AI models, examining whether the models showed hallucination tendencies as well as so-called "misalignment," meaning models not behaving as their developers intended. The evaluations were completed before OpenAI launched GPT-5 and Anthropic released Opus 4.1 in early August. Anthropic was founded by former OpenAI employees.