a16z (Andreessen Horowitz) recently released a list of potential “big ideas” in the tech field by 2026, proposed by partners from its Apps, American Dynamism, Biotechnology, Cryptocurrency, Growth, Infrastructure, and Speedrun teams.
Below are some selected big ideas and insights from special contributors in the cryptocurrency space, covering topics from intelligent agents and artificial intelligence (AI), stablecoins, tokenization and finance, privacy and security, to prediction markets and other applications. If you want to learn more about the technological outlook for 2026, please read the full article.
Building the Future
Trading platforms are just the beginning, not the end
Today, aside from stablecoins and some core infrastructure, nearly all successful crypto companies have already become, or are on their way to becoming, trading platforms. But if every crypto company turns into a trading platform, what is the end state? Intense homogeneous competition will not only fragment user attention but may also leave only a few winners, and companies that pivot to trading too early may miss the chance to build more defensible, sustainable business models.
I fully understand the difficult position founders are in when it comes to keeping their finances healthy, but chasing short-term product-market fit can come at a cost. In crypto this problem is especially acute, because the peculiar dynamics around tokens and speculation often pull founders toward instant gratification, a kind of "marshmallow test" for the industry.
Trading itself isn’t wrong—it is indeed an important function of market operation—but it isn’t necessarily the ultimate goal. Founders who focus on the product itself and seek product-market fit from a long-term perspective may ultimately become bigger winners.
– Arianna Simpson, General Partner of a16z Crypto Team
New thoughts on stablecoins, RWA tokenization, payments, and finance
Rethinking Real-World Asset (RWA) Tokenization and Stablecoins in a More Crypto-Native Way
We have seen banks, fintech firms, and asset managers show strong interest in bringing US stocks, commodities, indices, and other traditional assets onto the blockchain. However, as more traditional assets move on-chain, their tokenization often takes a skeuomorphic, "physical replica" approach, copying existing real-world asset structures rather than fully leveraging crypto-native features.
In contrast, synthetic assets such as perpetual futures (perps) can offer deeper liquidity and are easier to implement. Perps also provide an intuitive leverage mechanism, which arguably makes them the most fitting native derivative for crypto markets today. Emerging-market stocks might be one of the most interesting asset classes to "perpify." For some stocks, for example, liquidity in zero-days-to-expiration (0DTE) options markets already exceeds that of the spot market, which makes perpification a promising experiment.
Ultimately, this is a question of “perpify vs. tokenization”; regardless, we can expect to see more crypto-native real-world asset tokenizations in the coming year.
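To make the perp mechanism concrete, here is a minimal sketch of how a periodic funding payment tethers a perpetual future to its underlying index. The interval, clamp bound, and prices are illustrative assumptions, not any particular venue's formula.

```python
# Minimal sketch of a perpetual-futures funding payment (illustrative only;
# real venues use their own intervals, clamps, and index definitions).

def funding_rate(perp_price: float, index_price: float,
                 clamp: float = 0.0075) -> float:
    """Premium of the perp over its index, clamped to +/- clamp per interval."""
    premium = (perp_price - index_price) / index_price
    return max(-clamp, min(clamp, premium))

def funding_payment(position_notional: float, perp_price: float,
                    index_price: float) -> float:
    """Longs pay shorts when the perp trades above the index (and vice versa)."""
    return position_notional * funding_rate(perp_price, index_price)

# Example: a $10,000 long while the perp trades 0.2% above the index
# pays $20 of funding this interval, nudging the perp back toward the index.
print(funding_payment(10_000, perp_price=100.2, index_price=100.0))  # 20.0
```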
Similarly, in 2026, the stablecoin space will see more “issuance innovations,” not just tokenization. Stablecoins became mainstream in 2025, with issuance volumes continuing to grow.
However, stablecoins that lack robust credit infrastructure resemble "narrow banks": they hold only highly liquid assets deemed extremely safe. Narrow banks are useful products, but I don't believe they will be the long-term backbone of on-chain economies.
We have seen many emerging asset managers, curators, and protocols push on-chain loans collateralized by off-chain assets. Usually these loans are originated off-chain and then tokenized. I believe this approach offers limited advantages, mainly better distribution to users who are already on-chain. Debt should instead be originated directly on-chain rather than originated off-chain and tokenized afterwards. On-chain origination can cut loan-servicing and back-office infrastructure costs and improve accessibility. The challenges lie in compliance and standardization, but developers are actively working on them.
– Guy Wuollet, General Partner of a16z Crypto Team
Stablecoins driving core banking ledger upgrades and opening new payment scenarios
Today, most banks still run on legacy systems that modern developers would scarcely recognize: banks were early adopters of large-scale software in the 1960s and 1970s, and second-generation core banking software (for example, Temenos GLOBUS and Infosys Finacle) emerged in the 1980s and 1990s. But these systems are aging and upgrades come too slowly. As a result, many critical core ledgers, the databases that record deposits, collateral, and other obligations, still run on mainframes programmed in COBOL and rely on batch interfaces rather than modern APIs.
Most assets worldwide are stored in these decades-old core ledgers. While these systems have proven reliable, gained regulatory trust, and are deeply embedded in complex banking operations, they also hinder innovation. For example, adding real-time payment features can take months or years, compounded by technical debt and regulatory complexity.
This is where stablecoins come into play. Over the past few years, stablecoins have found product-market fit and successfully entered mainstream finance. This year, traditional financial institutions have embraced stablecoins at a new level. Financial tools like stablecoins, tokenized deposits, tokenized government bonds, and on-chain bonds enable banks, fintechs, and financial institutions to develop new products and serve more customers. More importantly, these innovations do not require rewriting legacy systems—despite their age, these systems have operated stably for decades. Stablecoins thus offer a new avenue for institutional innovation.
– Sam Broner
The future of intelligent agents and AI
Using AI to perform substantive research tasks
As a mathematical economist, earlier this year I found it very difficult to get consumer-grade AI models to understand my workflows; by November, I could give the models the kind of abstract instructions I would give a PhD student… and sometimes they came back with entirely new, correctly executed answers. We are also seeing AI used more broadly across research, especially in reasoning: AI models can now not only assist discovery but also autonomously solve Putnam problems (from perhaps the hardest undergraduate math competition in the world).
What remains unclear is which research areas will benefit most from this assistance, and how. I expect AI's research capabilities to foster a new "polymath" style of research: one that hypothesizes relationships between ideas and reasons quickly toward provisional answers. Those answers may not be fully accurate, but within the right logical framework they can at least point in a useful direction. Ironically, this approach leans on the models' capacity for "hallucination": once models are smart enough, letting them roam the space of abstractions freely, nonsense and all, can sometimes produce breakthroughs, much as human creativity does when it breaks out of linear thinking and fixed directions.
Thinking this way requires a new AI workflow: not just an agent-to-agent mode, but a more involved agents-wrapping-agents mode, in which different layers of models help researchers evaluate earlier models' proposals and distill the valuable insights. I have used this method to write papers, while others use it for patent searches, inventing new art forms, and (regrettably) discovering new smart contract attack vectors.
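One way to picture this agents-wrapping-agents workflow is the minimal sketch below, in which an outer reviewer model scores and filters proposals produced by an inner generator model. The `call_model` hook and the model names are placeholders for whatever LLM API a researcher actually uses; they are assumptions for illustration, not a prescribed stack.

```python
# Minimal sketch of an "agents wrapping agents" research loop.
# `call_model` is a placeholder for a real LLM client; model names are hypothetical.

from typing import Callable, List, Tuple

def generate_hypotheses(call_model: Callable[[str, str], str],
                        topic: str, n: int = 5) -> List[str]:
    """Inner agent: propose candidate ideas, accepting that some may be nonsense."""
    return [call_model("generator-model",
                       f"Propose hypothesis #{i + 1} linking ideas about: {topic}")
            for i in range(n)]

def review_hypotheses(call_model: Callable[[str, str], str],
                      hypotheses: List[str]) -> List[Tuple[str, float]]:
    """Outer agent: score each proposal so the researcher only reads the best ones."""
    scored = []
    for h in hypotheses:
        raw = call_model("reviewer-model",
                         f"Rate 0-10 how plausible and novel this hypothesis is: {h}")
        try:
            score = float(raw.strip())
        except ValueError:
            score = 0.0  # unparseable review -> treat as rejected
        scored.append((h, score))
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

def research_round(call_model, topic: str, keep: int = 2) -> List[str]:
    """One wrap: generate freely, then distill the few proposals worth human time."""
    ranked = review_hypotheses(call_model, generate_hypotheses(call_model, topic))
    return [h for h, _ in ranked[:keep]]
```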
However, enabling this research mode of wrapped reasoning agents requires better interoperability between models and a way to identify and fairly compensate each model's contribution, and these are precisely problems that crypto can help solve.
– Scott Kominers, Member of a16z Crypto Research Team, Professor at Harvard Business School
The invisible tax AI agents impose on the open web
With the rise of AI agents, an "invisible tax" is weighing on the open web and fundamentally disrupting its economic foundations. The pressure stems from a growing asymmetry between the web's content layer and its execution layer: AI agents extract data from ad-supported content sites (the content layer) and deliver convenience to users while systematically bypassing the revenue sources (ads and subscriptions) that fund content creation.
To prevent further decline of the open web (and to protect the diversity of content that fuels AI), we need technological and economic solutions deployed at scale. These could include next-generation sponsored content, micro-attribution systems, or other novel funding models. Existing AI licensing deals have proven to be stopgaps at best, often compensating content creators for only a small fraction of the revenue lost to AI traffic.
The web needs a new techno-economic model where value can flow automatically. The most critical shift next year will be moving from static licensing to real-time usage-based compensation. This involves testing and scaling systems—possibly leveraging blockchain-supported micro-payments (nanopayments) and complex attribution standards—to automatically reward all entities contributing to AI agents’ successful task completion.
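As a minimal sketch of what real-time, usage-based compensation could look like: each time an agent completes a task, a tiny payment is split across the sources whose content contributed, pro rata to an attribution score. The weights, fee, and `send_nanopayment` hook below are assumptions for illustration, not an existing standard.

```python
# Minimal sketch of usage-based nanopayment splitting (illustrative only).
# `send_nanopayment` stands in for whatever on-chain micropayment rail is used.

from typing import Callable, Dict

def split_task_fee(task_fee: float,
                   attribution: Dict[str, float],
                   send_nanopayment: Callable[[str, float], None]) -> Dict[str, float]:
    """Pay each contributing source its share of the fee, pro rata to attribution."""
    total = sum(attribution.values())
    if total <= 0:
        return {}
    payouts = {source: task_fee * weight / total
               for source, weight in attribution.items()}
    for source, amount in payouts.items():
        send_nanopayment(source, amount)  # e.g. a stablecoin micro-transfer
    return payouts

# Example: a $0.03 task fee split across three sources the agent actually used.
example = split_task_fee(
    0.03,
    {"news-site.example": 0.5, "docs.example": 0.3, "forum.example": 0.2},
    send_nanopayment=lambda source, amt: print(f"pay {amt:.4f} -> {source}"),
)
```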
– Liz Harkavy, a16z Crypto Investment Team
Privacy as a moat
Privacy will become the most important moat in the crypto space
Privacy is one of the key features driving the onboarding of global finance onto the blockchain. Yet, it remains a crucial element that almost all current blockchains lack. For most blockchains, privacy issues are often considered an afterthought.
But today, privacy itself has become a genuinely differentiating feature for blockchains. More importantly, privacy can create a chain lock-in effect, a kind of privacy network effect. In an era when performance is no longer a decisive competitive advantage, privacy matters even more.
With cross-chain bridge protocols, migrating between chains is straightforward as long as all information is public. Once privacy is introduced, that convenience disappears: tokens can move across chains easily, but privacy cannot. Users moving from a privacy chain to a public chain, or between privacy chains, take on risk, because observers watching on-chain data, mempools, or network traffic can infer who they are. Crossing the boundary between a privacy chain and a public chain, or even between two privacy chains, leaks metadata such as transaction times and amounts, which makes users easier to track.
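To illustrate why crossing a privacy boundary leaks information, here is a toy sketch of the kind of correlation an observer can run: matching a withdrawal on one chain to a deposit on another purely by amount and timing. The tolerance and delay thresholds are arbitrary assumptions; real deanonymization heuristics are far more sophisticated.

```python
# Toy sketch of cross-chain metadata correlation (illustrative only).
# An observer who sees both sides of a privacy/public boundary can link events
# by amount and timing alone, even without addresses.

from dataclasses import dataclass
from typing import List

@dataclass
class Transfer:
    chain: str
    amount: float    # token amount observed
    timestamp: int   # unix seconds

def link_candidates(withdrawal: Transfer, deposits: List[Transfer],
                    amount_tol: float = 0.01, max_delay: int = 600) -> List[Transfer]:
    """Deposits on another chain with roughly the same amount shortly after a withdrawal."""
    return [d for d in deposits
            if d.chain != withdrawal.chain
            and abs(d.amount - withdrawal.amount) <= amount_tol * withdrawal.amount
            and 0 <= d.timestamp - withdrawal.timestamp <= max_delay]

# Example: one matching deposit 90 seconds later is enough to suggest a link.
w = Transfer("privacy-chain", 12.5, 1_700_000_000)
ds = [Transfer("public-chain", 12.5, 1_700_000_090),
      Transfer("public-chain", 3.0, 1_700_000_050)]
print(link_candidates(w, ds))  # -> only the 12.5-token deposit
```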
Unlike the many homogeneous new chains whose transaction fees are being competed down toward zero, privacy-enabled blockchains can build stronger network effects. The reality is that if a "general-purpose" blockchain lacks a mature ecosystem, killer apps, or an unfair distribution advantage, there is little reason for users to choose it or build on it, let alone develop loyalty to it.
On public blockchains, users can easily transact with counterparties on other chains, so they simply join whichever chain they prefer. On privacy chains, the choice of which chain to join matters, because once users have joined they are reluctant to move, since crossing chains creates privacy risks. This produces a winner-takes-all dynamic, and because privacy is vital for most real-world applications, a few privacy chains may eventually dominate the crypto space.
– Ali Yahya, General Partner of a16z Crypto Team
Other industries and applications
Prediction markets will become bigger, broader, and smarter
Prediction markets are gradually going mainstream, and in the coming year, at the intersection of crypto and AI, they will grow larger, be applied more broadly, and become smarter, posing significant new challenges for developers.
First, more contracts will be listed on prediction markets. That means real-time odds not just on major elections or geopolitical events but also on granular outcomes and complex combinations of events. As these new contracts surface more information and integrate into news ecosystems (a trend already underway), they will raise important societal questions, such as how to weigh the informational value of these markets and how to design them to be more transparent and auditable, problems that crypto can help solve.
To handle the influx of new contracts, we need new ways to reach consensus on real-world events in order to settle them. Centralized resolution (for example, having a single party confirm whether an event actually occurred) is important but has limits, as contentious cases like the Zelensky suit market (on whether he would wear a suit) and the Venezuela election market showed. To address edge cases and help prediction markets expand into more practical applications, new decentralized governance mechanisms and large language model (LLM) oracle systems can assist in verifying disputed outcomes.
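As a sketch of how an LLM oracle might assist with disputed settlements, the snippet below polls several independent models against the market's written resolution criteria and only settles automatically when they agree beyond a threshold, escalating to governance otherwise. The model list, threshold, and `ask_model` hook are illustrative assumptions, not a description of any live system.

```python
# Minimal sketch of an LLM-assisted settlement oracle (illustrative only).
# `ask_model` stands in for real LLM calls; disagreement escalates to governance.

from collections import Counter
from typing import Callable, List, Optional

def settle_market(question: str, resolution_criteria: str, evidence: str,
                  ask_model: Callable[[str, str], str],
                  models: List[str],
                  agreement_threshold: float = 0.8) -> Optional[str]:
    """Return 'YES'/'NO' if enough independent models agree, else None (escalate)."""
    prompt = (f"Market question: {question}\n"
              f"Resolution criteria: {resolution_criteria}\n"
              f"Evidence: {evidence}\n"
              "Answer with exactly YES or NO.")
    votes = Counter(ask_model(m, prompt).strip().upper() for m in models)
    outcome, count = votes.most_common(1)[0]
    if outcome in ("YES", "NO") and count / len(models) >= agreement_threshold:
        return outcome
    return None  # contested: hand off to decentralized governance / human review
```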
AI’s potential isn’t limited to LLM-driven oracles. For example, active AI agents on these platforms can gather signals globally to gain short-term trading advantages. This can help us view the world from new perspectives and more accurately forecast future trends. (Projects like Prophet Arena have already generated excitement in this field.) Besides serving as complex political analysts providing insights, these AI agents may also reveal fundamental predictive factors behind complex social events as we study their emerging strategies.
Will prediction markets replace polls? No. Instead, they will improve polls (and poll data can be fed back into prediction markets). As a political economy professor, I am most excited about prediction markets working in tandem with a vibrant polling ecosystem, but that depends on new technologies: AI, which can improve the survey experience, and crypto, which can offer new ways to verify that survey and poll participants are humans, not bots.
– Andy Hall, a16z Crypto Research Advisor, Professor of Political Economy at Stanford University
Cryptography will expand into entirely new applications beyond blockchains
For years, SNARKs (Succinct Non-interactive Arguments of Knowledge, cryptographic proofs that let a verifier check a computation without re-executing it) have been used almost exclusively in blockchains. The reason is their enormous computational overhead: proving a computation can cost on the order of a million times more than simply running it. Where thousands of nodes must verify the same computation, that overhead is justified; elsewhere it is impractical.
That is about to change. By 2026, zkVM (zero-knowledge virtual machine) proof systems will bring the overhead down to roughly 10,000x, with memory footprints of only a few hundred megabytes, small enough to run on smartphones and cheap enough for widespread use. Here is why 10,000x may be the critical threshold: a high-end GPU has roughly 10,000 times the parallel throughput of a laptop CPU. By the end of 2026, a single GPU should be able to prove CPU-executed computations in real time.
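A back-of-the-envelope version of the threshold argument: if proving costs about 10,000x the underlying computation, and one GPU delivers roughly 10,000x the parallel throughput of the laptop-class CPU being proven, the two factors cancel and the prover keeps pace in real time. The throughput figures below are rough order-of-magnitude assumptions, not benchmarks.

```python
# Back-of-the-envelope check of the "10,000x is the threshold" claim
# (numbers are rough estimates, not measurements).

cpu_ops_per_sec = 1e9            # order-of-magnitude laptop CPU throughput (assumed)
proving_overhead = 10_000        # prover work per unit of proven computation
gpu_speedup_over_cpu = 10_000    # rough parallel-throughput advantage of one GPU

prover_ops_needed_per_sec = cpu_ops_per_sec * proving_overhead          # 1e13
prover_ops_available_per_sec = cpu_ops_per_sec * gpu_speedup_over_cpu   # 1e13

# A ratio >= 1.0 means one GPU can prove the CPU's execution as fast as it runs.
realtime_ratio = prover_ops_available_per_sec / prover_ops_needed_per_sec
print(realtime_ratio)  # 1.0 -> real-time proving is just within reach
```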
This will unlock a vision from the early research literature: verifiable cloud computing. If you already run CPU workloads in the cloud (because your tasks are too small to benefit from GPU acceleration, because you lack the expertise, or simply for historical reasons), you will be able to obtain cryptographic proofs that the computation was performed correctly, at reasonable cost. And because proof systems are already optimized for GPUs, your code will not need any special adjustments.
– Justin Thaler, Member of a16z Crypto Research Team, Associate Professor of Computer Science at Georgetown University
—— a16z Crypto Editorial Team