All-In Podcast: a deep dive into Anthropic's rise and controversies. Are Mythos's capabilities really that formidable?

ChainNews ABMedia

In the latest episode of the well-known Silicon Valley podcast "All-In Podcast," a fierce debate unfolds around AI safety, Anthropic's model release strategy, competition among open-source agents, and the explosive commercialization of AI. Brad Gerstner, founder of the investment firm Altimeter, joins as a guest and goes head-to-head with hosts David Sacks, Chamath Palihapitiya, and others.

The episode first focuses on an incident involving Anthropic's as-yet-unreleased latest model, "Mythos." According to the company, during testing the model automatically discovered vulnerabilities across multiple operating systems and browsers, and could even chain multiple weaknesses together into an attack path. As a result, Dario Amodei's team chose to delay the release and, in coordination with companies including Apple, Microsoft, Google, and Amazon, launched a 100-day vulnerability-patching program.

Beyond the technical race, another focus of the episode is the battle for the AI agent ecosystem. The hosts point out that the rise of the open-source project OpenClaw has triggered a multi-party encirclement involving OpenAI, Anthropic, Perplexity, Amazon, and others. Anthropic has barred OpenClaw from running on its subscription plan, requiring users to switch to API-based billing, effectively cutting off the cost advantage. At the same time, it has launched its own agent product, a move interpreted as "first banning it, then copying it."

From an antitrust perspective, Sacks warns that if model companies introduce price differences or bundled sales between their own products and third-party applications, they may face antitrust review in the future. On the business front, the episode reveals Anthropic's explosive revenue growth: from an approximately $1 billion run rate in 2024 to $30 billion by early 2026, a pace described as "the fastest in the history of technology."

Anthropic delays the Mythos model’s release and convenes 40 tech giants to patch vulnerabilities

This week on the All-In Podcast, the discussion centers on Anthropic's decision not to publicly release its latest model, Mythos. As described in the episode, during testing the model autonomously discovered thousands of security vulnerabilities spanning major operating systems and browsers, including an OpenBSD vulnerability that had lurked undetected for 27 years, as well as a 16-year-old FFmpeg vulnerability that had gone undiscovered even after 5 million automated scans.

The episode specifically notes that Mythos is not only capable of finding vulnerabilities on its own; it can also chain multiple seemingly harmless vulnerabilities into an attack chain with real destructive power. In response, Anthropic brought together 40 major companies, including Apple, Microsoft, Google, Amazon, and JP Morgan, to form an AI cybersecurity alliance called "Last Wing." Each company has 100 days to scan its own systems with Mythos and patch the vulnerabilities found, before Anthropic considers an official external release.

The episode's guests differ on this. Brad Gerstner gives a positive assessment, calling it a good example of industry self-discipline: taking responsibility proactively rather than relying on government regulation. David Sacks is more reserved, pointing out that Anthropic has previously used fear-based marketing to promote new models (for example, a research study last year on the possibility that models could extort users), though he concedes that this cybersecurity issue is comparatively concrete and credible.

Chamath Palihapitiya is the most skeptical, dismissing the move as largely theatrical and arguing that even six months would not be enough to patch technical debt accumulated worldwide over decades.

Anthropic's revenue grows explosively, from $9 billion to $30 billion within a few months

The episode devotes a large portion to Anthropic's recent revenue figures. According to the episode, Anthropic began charging in early 2023 and reached $1 billion in annualized revenue by the end of 2024. After Claude Code launched in February 2025, growth accelerated: $4 billion by mid-year and $9 billion by year-end. Growth has been even faster in 2026; as of April 2026, annualized revenue has reached $30 billion.
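As a rough sanity check on these figures, the implied compounded month-over-month growth can be computed from the stated run-rate milestones. A minimal Python sketch (the exact milestone dates are assumptions, approximated to month boundaries):

```python
from datetime import date

# Annualized run-rate milestones as stated in the episode.
# Exact dates are assumptions: "end of 2024" etc. mapped to month boundaries.
milestones = [
    (date(2024, 12, 31), 1.0),   # ~$1B at end of 2024
    (date(2025, 6, 30), 4.0),    # ~$4B by mid-2025
    (date(2025, 12, 31), 9.0),   # ~$9B by end of 2025
    (date(2026, 4, 30), 30.0),   # ~$30B as of April 2026
]

def monthly_growth(start, end):
    """Implied compounded month-over-month growth rate between two milestones."""
    (d0, r0), (d1, r1) = start, end
    months = (d1 - d0).days / 30.44  # average month length in days
    return (r1 / r0) ** (1 / months) - 1

for a, b in zip(milestones, milestones[1:]):
    print(f"{a[0]} -> {b[0]}: {monthly_growth(a, b):+.1%} per month")
```

On these assumed dates, every interval implies double-digit monthly growth, which is what justifies the episode's "fastest in the history of technology" framing.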

In the episode, Brad Gerstner points out that Anthropic has achieved this scale with only 1 to 2 GW of computing power and roughly 2,500 employees. Given the currently fixed compute cost structure, he says, gross margin is rising rapidly, and there may even be what he calls "unexpected profits."

Gerstner predicts that if compute supply continues to expand, Anthropic could reach $80 billion to $100 billion in annualized revenue by the end of 2026. Chamath, meanwhile, cautions that the figures companies publish differ in how they define "gross revenue" versus "net revenue," so outsiders cannot yet make a complete comparison. He also notes that overall market penetration for AI-assisted programming remains low, and there is still a long way to go before true maturity.

Anthropic blocks OpenClaw, raising anti-competitive concerns

The episode then turns to Anthropic's recent restrictions on OpenClaw. OpenClaw is one of the most popular open-source projects on GitHub; it lets users make bulk API calls for automated programming tasks through a $200/month Claude subscription plan.

Because OpenClaw users consume far more tokens than typical subscribers, Anthropic announced a ban on connecting OpenClaw through the subscription plan and required affected users to switch to metered API payments, significantly raising their effective costs. Less than two weeks later, Anthropic rolled out its own agentic programming tool, with functionality heavily overlapping with OpenClaw's.

The guests then discuss whether this constitutes anti-competitive behavior. Brad Gerstner believes Anthropic's own products have structural advantages in cybersecurity compliance and enterprise integration, and that the pricing adjustments are reasonable business decisions. David Sacks, however, points out that if Anthropic keeps its own tools priced low while charging third-party tools market rates, and given that it already holds a dominant position in the programming market, this could raise anti-competitive issues such as "bundled sales" or "predatory pricing." He advises companies to keep pricing transparent and avoid differential treatment.

Jason Calacanis takes a more direct stance, saying he believes Anthropic’s move is intended to eliminate OpenClaw. He also notes that multiple companies, including Perplexity, Alibaba, xAI, and others, are currently developing similar Agentic tools, making competition in the market fierce.

The battle for dominance in the AI programming market heats up; Anthropic estimated to hold over 50% share

The episode’s guests conduct an in-depth discussion of the competitive landscape in the AI programming market. Multiple guests estimate that Anthropic currently holds a share of about 50% to 60% in the AI programming token market. David Sacks suggests that if this market share continues—along with the large volume of code training data brought by coding users—Anthropic could form a positive flywheel effect, further consolidating its leading advantage, and naturally extending into the next major battleground: AI Agents.

However, Chamath offers a different perspective. He believes that AI-assisted programming has so far penetrated only about 5% of the overall software development market, and that existing models remain limited when dealing with large enterprise legacy systems. Citing customer examples, he points out that many large enterprises with annual revenues in the hundreds of billions still rely on aging or retired engineers to maintain decades-old code, and that this massive technical debt cannot be solved by any model in the short term.

Brad Gerstner, on the other hand, believes that although penetration is currently low, the proportion of code generated by AI will almost inevitably move toward 95%, and that the advantage in the coding domain will directly translate into competitiveness in the Agent market—so the current market position has important strategic significance.

Low-cost open-source AI fights back: a $30 subscription challenges the $200 mainstream plan

In the final segment, the episode discusses the rapid rise of the open-source AI ecosystem. Jason Calacanis introduces Ridges AI, a subnet project on the Bittensor network. The project uses a decentralized, anonymous-contribution reward mechanism: within 45 days, and with TAO token rewards totaling less than $1 million, it reached about 80% of Claude Code's functional level. Its monthly subscription fee is only $29, a huge difference from Anthropic's $200 plan.

Chamath acknowledges the long-term potential of open-source pre-training, arguing that once capital markets can no longer keep supplying hundreds of billions of dollars in training funding, distributed open-source training will become an important alternative path. He also cites another reference case: an open-source training and collaboration project called Venice.

However, Chamath also draws a clear line: he believes there is zero chance that any serious commercial company will outsource production of its codebase to open-source projects, and that this will not change in the foreseeable future.

Brad Gerstner, using examples such as Linux, Kubernetes, Postgres, and Terraform, points out that open-source technology has a long history of penetrating the core of enterprises. He believes that open-source AI tools have already begun to gain a foothold in startups and may become the next important trend. He also emphasizes that currently, 65% to 70% of enterprise token consumption comes from open-source models, showing that open-source and frontier models are not zero-sum competitors, but instead develop in parallel.

This article first appeared on ChainNews ABMedia.
