Gate News, April 29 — OpenAI researchers Sébastien Bubeck and Ernest Ryu say AI systems could perform most human research work within two years, pointing to mathematics as a clear measure of AI progress. Unlike ambiguous benchmarks, mathematical problems offer precise verification: an answer is either correct or incorrect, leaving no room for interpretation.
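The binary nature of mathematical verification that the researchers describe can be made concrete with a proof assistant (a tool the article does not name; this is an illustrative sketch, not something attributed to OpenAI). In Lean, a proof either type-checks or it does not — there is no partial credit:

```lean
-- A machine-checked statement: the checker either accepts this proof
-- in full or rejects it outright. Deleting or altering any step
-- makes the whole theorem fail to compile.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

This all-or-nothing property is exactly why a single error in a long chain of reasoning collapses an entire proof, as described below.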
Bubeck noted that genuine AI reasoning requires surviving long chains of argument: a single error in a multi-step proof collapses the whole result, which makes detecting and correcting mistakes mid-process the key goal for advanced models. He said OpenAI's internal models have already generated more than ten entirely new theorems publishable in top-tier combinatorics journals, which the researchers present as evidence that AI now produces genuinely original work rather than merely recombining existing papers.
However, sustained scientific breakthroughs demand steady focus across weeks of experimentation, and current systems still require close human supervision to guide and verify each change of direction. Bubeck uses "AGI time" to measure how long a model can independently sustain human-level reasoning; today's systems manage roughly several days to one week, and the industry target is weeks or months, which would enable autonomous work in fields such as biology.
Long-term memory is critical to this future. Standard chat windows limit depth—complex mathematical proofs often exceed 50 pages—while code repositories demonstrate how extended work sessions enable deeper problem-solving. As AI gains independence and memory, human expertise becomes more valuable, not less. Workers must retain the deep foundational knowledge to challenge and verify machine answers, and organizations will need new automated filters and reputation systems to maintain trust amid a flood of AI-assisted research.