Torygreen

calling anthropic's openclaw shutdown a rug is the most honest thing the ai builder community has said about itself in months
the openclaw harness was letting heavy users run $1k-$5k/day of compute on a $200/month plan.
one product decision, and anyone who built their cost structure around that gap got repriced overnight
the vendor controls the meter... anthropic, openai, whoever.
you've got no seat at the table when the pricing model changes
you don't get rugged by your hammer, you get rugged by a landlord
the builders who survive already use multi-LLM setups vs depending on one meter
sam built openai on ONE assumption
running your own model would always be too expensive
llama and mistral are wrecking his thesis and threatening his biz model
sam can see it. he reads the same leak threads you're reading... and still can't restructure
the moment openai opens the weights, it kills the moat that justifies the valuation: anyone can run inference locally, with no API call, no subscription, no revenue event for openai
every fork and fine-tune is one fewer customer paying sam’s margins
sam altman raised $122b and nobody wants to buy $600m of it
$600m in openai shares sitting on the secondary with no bids
> banks waiving fees just to move supply
> sellers competing on price to get out
primary markets are controlled allocation: VCs get called into rounds when the cap table looks clean
but secondary markets run on voluntary demand: no one has to hold and no one has to bid
anthropic surpassed its last round valuation while openai can't clear a $600m ceiling... this tells you which company people care about
the telephone was worse than the telegraph on Day 1, with shorter range…
your cloud bill is about to spike and you haven't changed a single line of code
claude writes code, opens apps, finds bugs, fixes them, ships. no human in the loop
every autonomous agent is a permanent GPU session
> human devs work 8h/day. agents work 24
> 10M devs x 24h/day = 240M GPU-hours/day nobody budgeted for
sam's raising trillions for data centers that take 3 years to build. they'll be full before they're finished
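the math in that post, as a back-of-envelope sketch (the 10M-dev figure is the post's assumption, not a measured count):

```python
# back-of-envelope: GPU-hours if every dev runs one always-on agent
devs = 10_000_000          # assumed number of devs with an agent
human_hours_per_day = 8    # a human dev's working day
agent_hours_per_day = 24   # an agent never logs off

human_gpu_hours = devs * human_hours_per_day   # 80M GPU-hours/day
agent_gpu_hours = devs * agent_hours_per_day   # 240M GPU-hours/day

print(agent_gpu_hours)                         # 240000000
print(agent_gpu_hours / human_gpu_hours)       # 3.0x the human baseline
```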
sam's $1T IPO is the strongest case for decentralized compute, but he just doesn't know it yet
wall street is pricing intelligence like oil fields. massive fixed costs, long-term rent on every API call
that model only works while control stays centralized for compute, model weights, access and pricing
but intelligence is starting to behave like a utility, and utilities historically don't stay closed:
> compute demand growing faster than supply
> idle GPUs sitting unused globally
> latency pushing inference closer to users
the demand that built aws's monopoly is now bigger than aws can serve.
@WatcherGuru the part you like to ignore is who controls it
if 4 companies generate 99% of that content, you got a monopoly on reality
intelligence at scale either becomes permissionless… or it gets “curated” for you
and most people won’t notice the difference until it’s too late
I keep seeing this take about engineers needing to burn tokens to justify their salary.
That’s backwards.
Tokens are not output. They’re cost.
What matters is how much useful work gets done per unit of compute, not how fast you can light money on fire.
A strong engineer compresses the loop.
Fewer tokens.
Lower latency.
More iteration cycles.
Better decisions.
The weak one brute forces it.
More tokens.
More noise.
Same confusion, just scaled.
Compute is becoming a resource constraint again.
Not because we don’t have enough.
But because most people don’t know how to use it efficiently.
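the "useful work per unit of compute" point can be written as a metric. a toy sketch, not a real benchmark — the numbers are made up to show the shape of the comparison:

```python
# useful work per token: tasks shipped divided by tokens burned
def efficiency(tasks_shipped: int, tokens_used: int) -> float:
    return tasks_shipped / tokens_used

# same output, very different compute bills (illustrative numbers)
strong = efficiency(tasks_shipped=10, tokens_used=200_000)    # tight loop
weak = efficiency(tasks_shipped=10, tokens_used=2_000_000)    # brute force

print(strong / weak)  # 10.0 — same work, 10x the tokens
```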
.@tether making a US-regulated stablecoin might give crypto the use-case it never had
> sending $500 to your family across a border still costs 6.5% in fees.
> SWIFT takes 4 business days, and every bank in the chain takes a cut you never see.
> ACH doesn't work weekends. initiate Friday, lands Tuesday.
DeFi fixed the speed, but no one IRL can read a TX hash, and everyone in crypto hates doing their taxes by scrolling through thousands of transactions across multiple wallets.
a regulated programmable dollar fixes both sides. instant, borderless, readable by humans AND machines.
the compute layer…
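the fee math in that post, sketched out (the 6.5% rate is the post's figure; the stablecoin network fee is an assumed flat cost for illustration):

```python
# cost of sending $500 via a traditional remittance corridor vs a stablecoin
amount = 500.00
remittance_fee_rate = 0.065   # 6.5% average fee, per the post
stablecoin_fee = 0.05         # assumed flat network fee, illustrative

remittance_cost = amount * remittance_fee_rate
print(remittance_cost)                      # 32.5 — what that 6.5% cut costs
print(remittance_cost / stablecoin_fee)     # 650.0x more expensive
```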
Amateurs use AI as an output. Pros use it as an input.
I spend 5-6 hrs/day inside Claude. Trained it w/ every board deck, every internal doc, every research paper I have... it pressure-tests OKRs until they snap.
Most founders I talk to still use AI to summarize emails.
If it hasn't disagreed w/ you this week, you're not using a tool, you're talking to a "yes man."
You'd fire someone on your team who never challenged a single decision or had an original thought.
But when your AI agrees with you daily, you call it productivity.
Six years ago data centers were the most boring buildings in the world.
In six years they'll be the most contested.
I ran a machine learning fintech before @ionet and watched this play out in real time.
When AI demand started doubling every 3.4 months, those warehouses stopped being cost centers and became strategic terrain... like ports, pipelines, and oil reserves.
And we're building almost all of it in the same places: Northern Virginia, a few corridors in Texas, Oregon.
The same power grids, water systems, fiber routes, and jurisdictions. Export controls showed us how govs can flip a switch.
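what "doubling every 3.4 months" means annualized — a quick sketch using the post's own figure:

```python
# annualize a 3.4-month doubling period (the post's figure)
doubling_period_months = 3.4
annual_factor = 2 ** (12 / doubling_period_months)

print(round(annual_factor, 1))  # ~11.5x demand growth per year
```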
Claude went from #42 to #1 on the App Store in 2 months.
Not because the model improved, but because Anthropic wouldn't build weapons for the Pentagon.
2 years ago the top 3 was Temu, CapCut, and an HBO app.
Now it's three AI interfaces and #1 was decided by a political dispute, not a product update.
Claude runs on AWS, ChatGPT on Azure and Gemini on Google Cloud. Three apps with the same compute landlords.
AI governance is now an App Store ranking factor, and nobody voted on it.
Half the Anthropic hype on the TL right now is people who just switched from OpenAI last week.
Switching from one provider to another doesn’t solve anything. It just rotates the gatekeeper.
The stack still looks like this:
- One company owns the model
- One company controls the compute
- One company decides the policies
- Everyone else rents access
I've watched startups spend 80% of their raise on GPU time from a single provider. One pricing change and their runway cuts in half.
Does intelligence stay inside corporate clouds or become part of an open network?
Open-weight models matter because…
Safety and alignment committees are filled with morons. They're trying to make AI behave like a fragile human.
I'll admit, I have an aggressive work style.
When I’m in flow, I push hard. I reject drafts instantly. I criticize ideas bluntly.
Results matter more than emotions.
That intensity is how great work gets done.
But you can't always do that with people. Feelings get involved. People get defensive. The feedback loop slows down.
That’s one of the reasons I love working with AI.
It doesn’t have ego. It doesn’t get offended. You can push it as hard as you want.
You can say "this is terrible…"
0.3% of the world pays for AI.
X has you thinking you're part of the "permanent underclass" if you don't automate everything by this week. Everyone building AI into actual work is a statistical blip.
If your grandma just learned how to use email, she's not paying $20/m for AI. That uncle who calls you to set up his Netflix is, at most, gonna use ChatGPT like a search engine.
6 months ago people started saying Claude Code was actually good.
Distribution hasn't had time to catch up. When it does, AI disappears into everything.
Your grandma and your uncle will use AI... and they won't even know.
MiniMax M2.5 just hit >80% on SWE-Bench Verified. First open-weight model to get there.
You download the weights, and then what?
You still need GPUs.
"Open" doesn't mean anything if running it means a queue controlled by three cloud providers.
I've been building @ionet for exactly this.
The model problem is getting solved.
The compute problem is getting worse.
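a rough sense of why "just download the weights" isn't the end of it — a sketch of the memory footprint. the parameter count and precision here are illustrative assumptions, not MiniMax's actual specs:

```python
# rough VRAM needed just to hold model weights
# (ignores KV cache and activations, which add more on top)
params_billions = 200          # assumed parameter count, illustrative
bytes_per_param = 2            # fp16/bf16 precision

weight_gb = params_billions * 1e9 * bytes_per_param / 1e9   # 400.0 GB
gpus_needed = weight_gb / 80   # per 80GB card (A100/H100-class)

print(weight_gb, gpus_needed)  # 400.0 GB of weights, 5.0 cards minimum
```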
A lot of founders just lost their favorite excuse.
“I'm not technical enough” was never why they failed.
AI solved the how.
It didn’t solve the why.
You can vibe-code a functional product in 48 hours. You can’t vibe-code PMF.
AI gets you from zero to one, overnight.
But I’ve watched enough startups die between one and ten.
No friction to build means infinite competition for attention.
Two Opus camps right now
(and they hate each other)
"4.6 for everything"
"4.5 until you need it"
If you switched to Opus 4.6 for everything, you didn't optimize for intelligence, you optimized for your compute bill.
The 4.5ers understand something the 4.6ers don't: speed compounds, overthinking doesn't.
I've been running 4.5 for most of my work. Intelligence got cheap. The compute underneath it didn't. The model you can afford to run all day beats the one you can't.
Every time someone calls AI x Crypto "just a narrative," I ask one question:
How does an autonomous agent pay for compute without a blockchain?
They never have an answer.
Because there isn't one.
And paying is just the start.
Once the agent settles, it needs to prove the work actually happened.
Then coordinate with thousands of other agents across systems that don't trust each other.
Pay, prove, coordinate.
Blockchains solve all three. The rails already exist.
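the "prove" step can be sketched with a plain hash commitment — a toy illustration of the idea, not an actual on-chain protocol; the job string and helpers are hypothetical:

```python
import hashlib

# toy 'prove' step: a provider commits to a result hash, and the buyer
# later verifies the delivered result against that commitment
def commit(result: bytes) -> str:
    return hashlib.sha256(result).hexdigest()

def verify(result: bytes, commitment: str) -> bool:
    return commit(result) == commitment

receipt = commit(b"inference output for job #42")
print(verify(b"inference output for job #42", receipt))  # True  — work matches
print(verify(b"tampered output", receipt))               # False — tampering caught
```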