Vitalik on the possible future of Ethereum (5): The Purge

Special thanks to Justin Drake, Tim Beiko, Matt Garnett, Piper Merriam, Marius van der Wijden, and Tomasz Stanczak for their feedback and review.

One of the challenges Ethereum faces is that, by default, any blockchain protocol's bloat and complexity grow over time. This happens in two places:

· Historical data: any transaction made and any account created at any point in history needs to be stored permanently by all clients, and downloaded by any new client doing a full sync. This makes client load and sync time keep increasing over time, even if the chain's capacity stays the same.

· Protocol features: it is much easier to add a new feature than to remove an old one, so code complexity grows over time.

For Ethereum to sustain itself over the long term, we need strong counter-pressure against both of these trends, reducing complexity and bloat over time. But at the same time, we need to preserve one of the key properties that make blockchains great: permanence. You can put an NFT, a love letter in transaction calldata, or a smart contract holding $1 million on-chain, go into a cave for ten years, and come out to find it still there, waiting for you to read and interact with. For dapps to feel comfortable fully decentralizing and removing their upgrade keys, they need to be confident that their dependencies will not upgrade in a way that breaks them – especially L1 itself.

It is absolutely possible to balance these two needs, and to minimize or reverse bloat, complexity, and decay while preserving continuity, if we set our minds to it. Living organisms can do it: while most age over time, a lucky few do not. Even social systems can have extremely long lifespans. In some cases Ethereum has already succeeded: proof of work is gone, the SELFDESTRUCT opcode is mostly gone, and beacon chain nodes already store old data for only up to six months. Figuring out this path for Ethereum in a more generalized way, and moving toward a long-term stable end state, is the ultimate challenge for Ethereum's long-term scalability, technical sustainability, and even security.

The Purge: main goals

· Reduce client storage requirements by reducing or removing the need for every node to permanently store all history, and perhaps eventually even state.

· Reduce protocol complexity by eliminating features that are no longer needed.

Table of Contents:

· History expiry

· State expiry

· Feature cleanup

History expiry

What problem does it solve?

At the time of writing, a fully synced Ethereum node requires about 1.1 TB of disk space for the execution client, plus several hundred more gigabytes for the consensus client. The vast majority of this is history: data about historical blocks, transactions, and receipts, much of it many years old. This means that even if the gas limit never increased at all, node size would keep growing by a few hundred gigabytes each year.

What is it, and how does it work?

A key simplifying feature of the history storage problem is that, because each block points to the previous block via a hash link (among other structures), consensus on the present is enough to give consensus on history. As long as the network has consensus on the latest block, any historical block, transaction, or piece of state (account balance, nonce, code, storage) can be provided by any single actor, together with a Merkle proof that lets anyone else verify its correctness. While consensus is an N/2-of-N trust model, history is a 1-of-N trust model.
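The hash-link argument can be sketched in a few lines. In this toy chain (all names illustrative, sha256 standing in for the real block hashing), each block's hash commits to its parent, so agreement on the head hash alone lets a node verify a complete history supplied by any single untrusted peer:

```python
import hashlib

def block_hash(parent_hash: bytes, payload: bytes) -> bytes:
    # Each block commits to its parent, so the head hash transitively
    # commits to every earlier block, transaction, and receipt.
    return hashlib.sha256(parent_hash + payload).digest()

def head_of(payloads) -> bytes:
    h = b"\x00" * 32  # genesis parent
    for p in payloads:
        h = block_hash(h, p)
    return h

def verify_history(head: bytes, payloads) -> bool:
    # A node that only agreed on `head` (via consensus) can check a full
    # history served by any single peer; no trust in that peer is needed.
    return head_of(payloads) == head
```

Any tampered history fails verification, which is why correctness does not depend on who serves the data.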

This gives us a lot of options for how we store history. One natural choice is a network where each node stores only a small fraction of the data. This is how torrent networks have worked for decades: while the network as a whole stores and distributes millions of files, each participant stores and distributes only a few of them. Counterintuitively, this approach does not necessarily reduce the robustness of the data. If, by making nodes cheaper to run, we can get to a network of 100,000 nodes where each node stores a random 10% of the history, then each piece of data gets replicated 10,000 times – exactly the same replication factor as a 10,000-node network where every node stores everything.
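The replication arithmetic in the paragraph above is easy to check directly (a toy calculation, not a protocol specification):

```python
def expected_copies(num_nodes: int, fraction_stored: float) -> float:
    # If each node independently stores a random `fraction_stored` share of
    # history, each piece of data is expected to exist in this many copies.
    return num_nodes * fraction_stored

# 100,000 nodes each storing 10% of history replicate every piece of data
# exactly as many times as 10,000 nodes each storing all of it.
```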

Today, Ethereum has already begun to move away from the model where all nodes store all history forever. Consensus blocks (the portion related to proof-of-stake consensus) are stored for only about six months. Blobs are stored for only about 18 days. EIP-4444 aims to introduce a one-year storage period for historical blocks and receipts. The long-term goal is to have a harmonized period (perhaps around 18 days) during which every node is responsible for storing everything, and then a peer-to-peer network made up of Ethereum nodes storing the older data in a distributed way.

Erasure codes can be used to increase robustness while keeping the same replication factor. In fact, blobs are already erasure-coded to support data availability sampling. The simplest solution may well be to reuse this erasure coding, and put execution and consensus block data into blobs as well.
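As a toy illustration of why erasure coding helps (production systems, including the blob scheme, use Reed-Solomon codes over larger fields, not this), a single XOR parity chunk already lets the network lose any one chunk without losing data, at a replication cost of only one extra chunk:

```python
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(chunks: list) -> list:
    # Append one parity chunk: the XOR of all data chunks.
    return chunks + [reduce(xor_bytes, chunks)]

def recover(encoded: list, lost_index: int) -> bytes:
    # XOR of all surviving chunks reconstructs the one that was lost.
    survivors = [c for i, c in enumerate(encoded) if i != lost_index]
    return reduce(xor_bytes, survivors)
```

Real codes generalize this so that any k of n chunks suffice, which is what makes low per-node storage compatible with high robustness.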

What are the connections with existing research?

· EIP-4444;

· Torrents and EIP-4444;

· Portal network;

· Portal network and EIP-4444;

· Distributed storage and retrieval of SSZ objects in Portal;

· How to raise the gas limit (Paradigm).

What else needs to be done, what needs to be weighed?

The main remaining work involves building and integrating a concrete distributed solution for storing history – at least execution history, but ultimately consensus blocks and blobs as well. The easiest solutions are (i) simply bringing in an existing torrent library, and (ii) the Ethereum-native solution known as the Portal network. Once either of these is in place, we can turn on EIP-4444. EIP-4444 itself does not require a hard fork, but it does need a new network protocol version. For this reason, there is value in enabling it for all clients at the same time; otherwise there is a risk of clients failing because they connect to other nodes expecting to download the full history but do not actually get it.

The main tradeoff is in how hard we try to keep 'ancient' history available. The easiest solution would be to just stop storing ancient history tomorrow, and rely on existing archive nodes and various centralized providers for replication. This is easy, but it weakens Ethereum's standing as a place of permanent record. The harder but safer path is to first build out and integrate the torrent network for storing history in a distributed way. Here, there are two dimensions of 'how hard we try':

  1. How hard do we try to ensure that a maximally large set of nodes really does store all the data?

  2. How deeply do we integrate history storage into the protocol?

A maximally paranoid approach to (1) would involve proof of custody: effectively requiring each proof-of-stake validator to store a certain fraction of history, and periodically checking cryptographically that they actually do so. A more moderate approach is to set a voluntary standard for the percentage of history each client stores.

For (2), a basic implementation involves no more than what is already done today: Portal already stores ERA files containing the entire history of Ethereum. A more thorough implementation would involve actually hooking this up to the sync process, so that someone who wants to sync a full-history-storing node or an archive node could do so via a direct sync from the Portal network, even if no other archive nodes were online.

How does it interact with other parts of the roadmap?

If we want to make it extremely easy to run or spin up a node, reducing history storage requirements is arguably even more important than statelessness: of the 1.1 TB a node needs, about 300 GB is state, and the remaining roughly 800 GB is history. The vision of an Ethereum node running on a smartwatch and taking only a few minutes to set up is only achievable if both statelessness and EIP-4444 are implemented.

Limiting history storage also makes it more viable for newer Ethereum nodes to support only the latest versions of the protocol, which makes them simpler. For example, many lines of code can now be safely removed because the empty storage slots created during the 2016 DoS attacks have all been deleted. Now that the switch to proof of stake is history, clients can safely remove all proof-of-work-related code.

State expiry

What problem does it solve?

Even if we remove the need for clients to store history, a client's storage requirements will keep growing, by about 50 GB per year, because the state keeps growing: account balances and nonces, contract code, and contract storage. Users can pay a one-time fee, and in doing so impose a burden on present and future Ethereum clients forever.

Making state 'expire' is harder than making history expire, because the EVM is fundamentally designed around the assumption that once a state object is created, it exists forever and can be read by any transaction at any time. If we introduce statelessness, there is an argument that this problem may not be so bad: only a specialized class of block builders would need to actually store state, while all other nodes (even inclusion list production!) could run statelessly. However, there is also an argument that we do not want to lean on statelessness too heavily, and that eventually we may want state to expire in order to keep Ethereum decentralized.

What is it and how does it work

Today, when you create a new state object (which can happen in one of three ways: (i) sending ETH to a new account, (ii) creating a new account with code, (iii) setting a previously untouched storage slot), that state object stays in the state forever. What we want instead is for objects to automatically expire over time. The key challenge is to do this in a way that achieves three goals:

  1. Efficiency: running the expiry process should not require large amounts of extra computation.

  2. User-friendliness: if someone goes into a cave for five years and comes back, they should not lose access to their ETH, ERC-20s, NFTs, CDP positions…

  3. Developer-friendliness: developers should not have to switch to a completely unfamiliar mental model. Additionally, applications that are dormant today and never get updated should keep working reasonably well.

The problem is easy to solve if you don't care about meeting these goals. For example, each state object could also store an expiry-date counter (where the expiry date can be extended by burning ETH, which could happen automatically on any read or write), with a process that loops through the state and removes state objects whose expiry date has passed. However, this introduces extra computation (and even extra storage requirements), and it certainly fails the user-friendliness requirement. It is also hard for developers to reason about edge cases involving storage values that sometimes reset to zero. Making the expiry timer contract-wide makes the developer's life technically easier, but makes the economics harder: the developer has to think about how to 'pass on' the ongoing storage cost to users.
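The naive scheme just described can be sketched as follows (hypothetical names; block-based timing is an assumption). The point to notice is the sweep: an O(n) scan over the entire state, which is exactly the extra computation the efficiency goal rules out, on top of the user-unfriendly silent deletions:

```python
class StateObject:
    def __init__(self, value, expiry_block: int):
        self.value = value
        self.expiry_block = expiry_block

def touch(state: dict, key, current_block: int, extension: int):
    # Any read or write extends the object's expiry (paid for by burning ETH).
    obj = state[key]
    obj.expiry_block = max(obj.expiry_block, current_block + extension)
    return obj.value

def sweep(state: dict, current_block: int) -> dict:
    # The expiry loop: an O(n) pass over the whole state, after which
    # untouched objects silently vanish (storage values "reset to zero").
    return {k: o for k, o in state.items() if o.expiry_block > current_block}
```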

These are problems the Ethereum core development community has wrestled with for many years, with proposals like 'blockchain rent' and 'regenesis'. In the end, we combined the best parts of the proposals and converged on two categories of 'least bad known solutions':

· Partial state-expiry proposals;

· Address-period-based state-expiry proposals.

Partial state expiry

The partial state-expiry proposals all follow the same principle. We split the state into chunks. Everyone stores forever the 'top-level map' of which chunks are empty or nonempty. The data within each chunk is only stored if it has been accessed recently. There is a 'resurrection' mechanism for restoring data that is no longer stored.

The main differences between these proposals are (i) how we define 'recently', and (ii) how we define 'chunk'. One concrete proposal is EIP-7736, which builds on the 'stem-and-leaf' design introduced for Verkle trees (though it is compatible with any form of statelessness, including binary trees). In this design, the header, code, and storage slots that are adjacent to each other are stored under the same 'stem'. The data stored under one stem can be at most 256 * 31 = 7,936 bytes. In many cases, an account's entire header and code, plus many of its key storage slots, will all sit under the same stem. If the data under a given stem is neither read nor written for six months, it is no longer stored; instead, only a 32-byte commitment to the data (a 'stub') is stored. Future transactions that access that data would need to 'resurrect' it, providing a proof that is checked against the stub.
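The stem lifecycle can be sketched like this (a toy model, not EIP-7736 itself: sha256 stands in for the real tree commitment, wall-clock seconds stand in for the six-month window, and all names are illustrative):

```python
import hashlib

PERIOD = 6 * 30 * 24 * 3600  # ~6 months, in seconds (illustrative)

def commit(leaves: dict) -> bytes:
    # Stand-in for the real (Verkle / binary tree) commitment to the leaves.
    blob = b"".join(i.to_bytes(2, "big") + v for i, v in sorted(leaves.items()))
    return hashlib.sha256(blob).digest()

class Stem:
    """Up to 256 leaves of 31 bytes each (7,936 bytes max) under one stem."""

    def __init__(self, leaves: dict, now: int):
        self.leaves = dict(leaves)  # leaf index -> 31-byte value
        self.stub = None
        self.last_access = now

    def expire_if_stale(self, now: int):
        if self.leaves is not None and now - self.last_access > PERIOD:
            self.stub = commit(self.leaves)  # keep only 32 bytes
            self.leaves = None

    def resurrect(self, leaves: dict, now: int):
        # A later transaction must re-supply the data, checked against the stub.
        if commit(leaves) != self.stub:
            raise ValueError("resurrection data does not match stub")
        self.leaves, self.stub, self.last_access = dict(leaves), None, now
```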

There are other ways to implement a similar idea. For example, if account-level granularity is not enough, we could make a scheme where each 1/2^32 fraction of the tree is governed by a similar stem-and-leaf mechanism.

This gets trickier because of incentives: an attacker could force clients to permanently store a very large amount of state by putting a very large amount of data into a single subtree and sending a single transaction every year to 'renew the tree'. If you make the renewal cost proportional to the tree's size (or the renewal duration inversely proportional to it), someone could grief another user by putting a very large amount of data into the same subtree as them. One could try to limit both problems by making the granularity dynamic based on subtree size: for example, each consecutive run of 2^16 = 65,536 state objects could be treated as a 'group'. But these ideas are more complex; the stem-based approach is simple, and it aligns incentives, because typically all the data under a stem relates to the same application or user.

Address-period-based state expiry

What if we want to avoid any permanent state growth at all, even 32-byte stubs? This is a hard problem because of resurrection conflicts: what if a state object gets deleted, later EVM execution puts another state object in exactly the same position, and then someone who cared about the original state object comes back and tries to recover it? With partial state expiry, the 'stub' prevents new data from being created in that slot. With full state expiry, we cannot afford to store even the stub.

The address-period-based design is the best-known idea for solving this. Instead of one state tree storing the entire state, we have an ever-growing list of state trees, and any state that gets read or written is saved into the most recent state tree. A new empty state tree is added once per period (say, once a year). Older trees are frozen solid. Full nodes store only the two most recent trees. If a state object was untouched for two periods and so falls into an expired tree, it can still be read or written to, but the transaction needs to provide a Merkle proof for it – and once it does, a copy is saved again into the latest tree.

A key idea that makes this all user- and developer-friendly is the concept of the address period. An address period is a number that is part of an address. A key rule is that an address with address period N can only be read or written during or after period N (i.e. once the state tree list reaches length N). If you are saving a new state object (e.g. a new contract, or a new ERC-20 balance), then as long as you make sure to put the state object into a contract whose address period is N or N-1, you can save it right away, with no need for proofs that nothing was there before. Any additions or edits to state in older address periods, by contrast, do require a proof.
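The tree-list mechanics can be sketched as follows (a simplified model with hypothetical names: dicts stand in for Merkle trees, a string placeholder stands in for a real Merkle proof, and frozen trees are kept in memory here purely so the example is self-contained):

```python
class EpochedState:
    """One state tree per period; full nodes keep only the two newest."""

    def __init__(self):
        self.trees = [{}]  # period 0

    def new_period(self):
        self.trees.append({})  # older trees freeze; real nodes drop them

    def write(self, address_period: int, key, value, proof=None):
        current = len(self.trees) - 1
        if address_period > current:
            raise ValueError("address period N is usable only in period >= N")
        if address_period < current - 1 and proof is None:
            # The object may sit in a frozen tree: a Merkle proof is needed
            # before a fresh copy is re-saved into the newest tree.
            raise ValueError("proof against an expired tree required")
        self.trees[current][key] = value

    def read_hot(self, key):
        # Reads consult only the two most recent trees.
        for tree in reversed(self.trees[-2:]):
            if key in tree:
                return tree[key]
        return None  # older data would require a proof to resurrect
```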

This design keeps most of Ethereum's current properties, requires very little extra computation, lets applications be written almost exactly as they are today (ERC-20s would need some rewriting, to ensure that balances of addresses with address period N are stored in a child contract that itself has address period N), and solves the 'user goes into a cave for five years' problem. However, it has one big problem: addresses need to be extended beyond 20 bytes to fit address periods in.

Address space extension

One proposal is to introduce a new 32-byte address format that includes a version number, an address period number, and an extended hash:

0x01 (red) 0000 (orange) 000001 (green) 57aE408398dF7E5f4552091A69125d5dFcb7B8C2659029395bdF (blue)

The red part is a version number. The four orange zeros are intended as empty space that could hold a shard number in the future. The green part is the address period number. The blue part is a 26-byte hash.
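The layout above can be expressed as a small packing helper (hypothetical helper names; the format itself is only a proposal): 1 version byte + 2 reserved shard bytes + 3 address-period bytes + 26 hash bytes = 32 bytes.

```python
def make_address(version: int, shard: int, period: int, hash26: bytes) -> bytes:
    # 1-byte version, 2 reserved bytes for a future shard number,
    # 3-byte address period, 26-byte hash: 32 bytes total.
    assert len(hash26) == 26
    return (version.to_bytes(1, "big") + shard.to_bytes(2, "big")
            + period.to_bytes(3, "big") + hash26)

def address_period(addr: bytes) -> int:
    assert len(addr) == 32
    return int.from_bytes(addr[3:6], "big")
```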

The key challenge here is backwards compatibility. Existing contracts are designed around 20-byte addresses, and often use tight byte-packing techniques that explicitly assume addresses are exactly 20 bytes long. One idea for solving this involves a translation map, where old-style contracts interacting with new-style addresses would see a 20-byte hash of the new-style address. However, there are significant complexities in making this safe.

Address space contraction

Another approach goes in the opposite direction: we immediately forbid some 2^128-sized subrange of addresses (e.g. all addresses starting with 0xffffffff), and then use that range to introduce addresses with address periods and 14-byte hashes.

0xffffffff000169125d5dFcb7B8C2659029395bdF

The main sacrifice this approach makes is introducing security risks for counterfactual addresses: addresses that hold assets or permissions but whose code has not yet been published on-chain. The risk is that someone creates an address claiming to hold one piece of (not-yet-published) code, while also possessing another valid piece of code that hashes to the same address. Computing such a collision requires 2^80 hashes today; address space contraction would reduce that to a much more accessible 2^56 hashes.

The key risk area – counterfactual addresses that are wallets not held by a single owner – is relatively rare today, but will likely become more common as we move into a multi-L2 world. The only solution is to simply accept this risk, but identify all the common use cases where it could cause problems, and come up with effective workarounds.

What are the connections with existing research?

Early proposals

· Blockchain rent;

· Regenesis;

· Ethereum state size management theory;

· Several possible paths to statelessness and state expiry;

Partial state expiry proposals

· EIP-7736;

Address space extension documents

· Original proposal;

· Ipsilon review;

· Blog post review;

· What would break if we lose collision resistance.

What else needs to be done, what needs to be weighed?

I believe there are four feasible paths for the future:

· We implement statelessness and never introduce state expiry. State grows continually (albeit slowly: we may not see it exceed 8 TB for decades), but it only needs to be held by a relatively specialized class of users: not even PoS validators would need state.

One feature that does need access to part of the state is inclusion list production, but we can do this in a decentralized way: each user is responsible for maintaining the part of the state tree that contains their own accounts. When they broadcast a transaction, they broadcast it together with a proof of the state objects accessed during the verification step (this works for both EOAs and ERC-4337 accounts). Stateless validators can then combine these proofs into a proof for the whole inclusion list.

· We do partial state expiry, and accept a much lower but still nonzero rate of permanent state size growth. This outcome is arguably similar to how history-expiry proposals involving peer-to-peer networks accept a much lower but still nonzero rate of permanent history storage growth, with each client required to store a low but fixed percentage of the historical data.

· We do state expiry with address space extension. This involves a multi-year process of making sure the address format translation approach works and is safe, including for existing applications.

· We do state expiry with address space contraction. This involves a multi-year process of making sure all the security risks involving address collisions, including cross-chain scenarios, are handled.

An important point is that the hard questions around address space extension and contraction will have to be addressed eventually, whether or not a state expiry scheme that depends on address format changes is ever implemented. Today, it takes roughly 2^80 hashes to generate an address collision, a computational load that is already feasible for extremely well-resourced actors: a GPU can do around 2^27 hashes per second, so running for a year it can compute around 2^52, meaning all of the world's roughly 2^30 GPUs could compute a collision in around a quarter of a year, and FPGAs and ASICs could accelerate this further. In the future, such attacks will become accessible to more and more actors. Hence, the actual cost of implementing full state expiry may not be as high as it seems, since we have to deal with this very challenging address problem regardless.
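The back-of-the-envelope figures above can be reproduced directly (rough order-of-magnitude assumptions from the text, not measurements):

```python
SECONDS_PER_YEAR = 365 * 24 * 3600      # about 2**25
GPU_HASHES_PER_SECOND = 2**27            # rough per-GPU figure from the text

def years_to_find_collision(total_hashes: float, num_gpus: float) -> float:
    hashes_per_year = num_gpus * GPU_HASHES_PER_SECOND * SECONDS_PER_YEAR
    return total_hashes / hashes_per_year

# One GPU-year is about 2**52 hashes, so ~2**30 GPUs need roughly a quarter
# of a year for the 2**80 hashes of a full-length-address collision, while
# the 2**56 hashes of a contracted address space are nearly instantaneous.
```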

How does it interact with other parts of the roadmap?

State expiry would potentially make it easier to transition from one state tree format to another, because no transition procedure would be needed: you could simply start making new trees in the new format, and later do a hard fork to convert the older trees. So although state expiry is complex, it does have benefits in simplifying other parts of the roadmap.

Feature cleanup

What problem does it solve?

One of the key preconditions for security, accessibility, and credible neutrality is simplicity. If a protocol is beautiful and simple, it is less likely to have bugs. It raises the chance that new developers can come in and work on any part of it. It is more likely to be fair, and easier to defend against special interests. Unfortunately, protocols, like any social system, become more complex over time by default. If we do not want Ethereum to sink into a black hole of ever-increasing complexity, we need to do one of two things: (i) stop making changes and ossify the protocol, or (ii) become able to actually remove features and reduce complexity. An intermediate route is also possible: make fewer changes to the protocol, while also removing at least a little complexity over time. This section discusses how to reduce or remove complexity.

What is it, and how does it work?

There is no single giant fix that reduces protocol complexity; the nature of the problem is that there are many small fixes.

One example that is mostly finished already, and that can serve as a blueprint for handling the others, is the removal of the SELFDESTRUCT opcode. SELFDESTRUCT was the only opcode that could modify an unlimited number of storage slots within a single block, requiring clients to implement significantly more complexity to avoid DoS attacks. The opcode's original purpose was to enable voluntary state clearing, allowing state size to decrease over time. In practice, very few ended up using it. In the Dencun hard fork, the opcode was weakened so that it only allows self-destructing accounts created within the same transaction. This solves the DoS problem and allows a significant simplification of client code. In the future, it likely makes sense to eventually remove the opcode entirely.

Some key examples of protocol-simplification opportunities identified so far include the following. First, some examples outside the EVM; these are relatively non-invasive, and thus easier to get consensus on and implement in a shorter timeframe.

· RLP → SSZ transition: originally, Ethereum objects were serialized using an encoding called RLP. RLP is untyped and needlessly complex. Today, the beacon chain uses SSZ, which is significantly better in many ways, including supporting not just serialization but also hashing. Eventually, we want to get rid of RLP entirely and move all data types into SSZ structures, which would in turn make upgrades much easier. Current EIPs on this include [1] [2] [3].
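The "serialization plus hashing" point can be illustrated with a toy SSZ-style sketch (this is not the real SSZ spec, which also pads chunk counts to powers of two, mixes in list lengths, and more): fixed-size values serialize to fixed-width little-endian bytes, and the same 32-byte chunks feed directly into a Merkle root.

```python
import hashlib

def ssz_uint64(x: int) -> bytes:
    # SSZ serializes fixed-size integers as fixed-width little-endian --
    # typed and offset-predictable, unlike RLP's length-prefixed items.
    return x.to_bytes(8, "little")

def toy_hash_tree_root(values) -> bytes:
    # Toy Merkleization over 32-byte chunks (assumes a power-of-two count):
    # the same encoding supports both serialization and hashing.
    chunks = [ssz_uint64(v).ljust(32, b"\x00") for v in values]
    while len(chunks) > 1:
        chunks = [hashlib.sha256(chunks[i] + chunks[i + 1]).digest()
                  for i in range(0, len(chunks), 2)]
    return chunks[0]
```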

· Removing old transaction types: there are too many transaction types today, and many of them could potentially be removed. A more moderate alternative to full removal is an account abstraction feature, by which smart accounts could include code to process and validate old-style transactions if they so choose.

· LOG reform: logs create bloom filters and other logic that add complexity to the protocol, but are not actually used by clients because they are too slow. We could remove these features, and instead put effort into alternatives, such as out-of-protocol decentralized log-reading tools built with modern technology like SNARKs.

· Eventually removing the beacon chain sync committee mechanism: the sync committee mechanism was originally introduced to enable light client verification of Ethereum. However, it adds significant protocol complexity. Eventually, we will be able to verify the Ethereum consensus layer directly with SNARKs, removing the need for a dedicated light client verification protocol. Potentially, changes to consensus could let us remove sync committees even earlier, by creating a more 'native' light client protocol that involves verifying signatures from a random subset of Ethereum consensus validators.

· Data format harmonization: today, execution state is stored in a Merkle Patricia tree, consensus state is stored in an SSZ tree, and blobs are committed to with KZG commitments. In the future, it makes sense to have a single unified format for block data and a single unified format for state. These formats would cover all the important needs: (i) simple proofs for stateless clients, (ii) serialization and erasure coding of data, (iii) standardized data structures.

· Removing beacon chain committees: this mechanism was originally introduced to support a particular version of sharding. Instead, we ended up doing sharding through L2s and blobs. Hence, committees are unnecessary, and an effort to remove them is underway.

· Removing mixed endianness: the EVM is big-endian and the consensus layer is little-endian. It may make sense to re-harmonize and make everything one or the other (probably big-endian, because the EVM is harder to change).

Now, some examples in EVM:

· Gas mechanism simplification: the current gas rules are not well optimized for giving clear limits on the quantity of resources needed to verify a block. Key examples of this include (i) storage read/write costs, which are meant to limit the number of reads/writes in a block but are currently quite haphazard, and (ii) memory filling rules, where it is currently hard to estimate the EVM's maximum memory consumption. Proposed fixes include the stateless gas cost changes (which would harmonize all storage-related costs into one simple formula) and a memory pricing proposal.

· Precompile removal: many of the precompiles Ethereum has today are needlessly complex and relatively unused, and account for a large share of consensus failures despite being used by almost no applications. Two ways of dealing with this are (i) simply removing the precompile, and (ii) replacing it with a piece of (inevitably more expensive) EVM code that implements the same logic. This draft EIP proposes doing the latter for the identity precompile as a first step; later, RIPEMD-160, MODEXP, and BLAKE may be candidates for removal.

· Removing gas observability: make it so that EVM execution can no longer see how much gas it has left. This would break a few applications (most notably, sponsored transactions), but would make it much easier to upgrade in the future (e.g. to more advanced versions of gas). The EOF spec already makes gas unobservable, but to gain the protocol simplification benefit, EOF would need to become mandatory.

· Improvements to static analysis: EVM code is hard to statically analyze today, particularly because jumps can be dynamic. This also makes it harder to build optimized EVM implementations (which compile EVM code ahead-of-time into other languages). We could fix this by removing dynamic jumps (or making them much more expensive, e.g. with a gas cost linear in the total number of JUMPDESTs in the contract). EOF does this, though getting the protocol simplification benefits would require making EOF mandatory.
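To see why dynamic jumps hurt analysis, consider the pre-scan every client must perform before execution: because a JUMP target can be computed at runtime, the set of valid JUMPDESTs must be found by scanning the entire bytecode, carefully skipping PUSH immediates where a 0x5B byte is data rather than an opcode. A minimal sketch of that scan:

```python
JUMPDEST = 0x5B
PUSH1, PUSH32 = 0x60, 0x7F

def valid_jumpdests(code: bytes) -> set:
    # Runtime-computed JUMP targets force a whole-bytecode pre-scan: any
    # position could be jumped to, so every valid JUMPDEST must be known.
    dests, i = set(), 0
    while i < len(code):
        op = code[i]
        if op == JUMPDEST:
            dests.add(i)
        if PUSH1 <= op <= PUSH32:
            i += op - PUSH1 + 1  # skip the pushed immediate bytes
        i += 1
    return dests
```

With only static jumps (as in EOF), targets are known at validation time and no such scan is needed.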

What are the connections with existing research?

· The Purge: next steps;

· SELFDESTRUCT removal;

· SSZ-ification EIPs: [1] [2] [3];

· Stateless gas cost changes;

· Linear memory pricing;

· Precompile removal;

· Bloom filter removal;

· An approach for reading logs securely off-chain, using incremental verifiable computation (read: recursive STARKs).

What else needs to be done, what needs to be weighed?

The major tradeoffs in this kind of feature simplification are (i) how much we simplify and how fast, versus (ii) backwards compatibility. Ethereum's value as a chain comes from it being a platform where you can deploy an application and be confident it will still work years later. At the same time, it is possible to take that ideal too far and, to paraphrase William Jennings Bryan, 'crucify Ethereum upon a cross of backwards compatibility'. If there are only two applications in all of Ethereum that use a given feature, and one has had zero users for years while the other is almost entirely unused and secures a total of $57 of value, then we should just remove the feature, and if needed pay the victims $57 out of pocket.

A broader social issue is to create a standardized pipeline for making non-urgent backward compatibility-breaking changes. One way to address this issue is to examine and extend existing precedents, such as the self-destruct process. The pipeline looks as follows:

  1. Start talking about removing feature X;

  2. Do analysis to determine the impact of removing X on applications, and depending on the results: (i) abandon the idea, (ii) proceed as planned, or (iii) identify a modified 'least disruptive' way to remove X and proceed;

  3. Make a formal EIP to deprecate X. Make sure that popular high-level infrastructure (e.g. programming languages, wallets) respects this and stops using the feature;

  4. Finally, actually remove X.

There should be a multi-year pipeline between step 1 and step 4, with clear information about which items are at which step. There is a tradeoff between how vigorous and fast the feature-removal pipeline is, versus being more conservative and putting more resources into other areas of protocol development – but we are still far from the Pareto frontier.

EOF

One set of major changes that has been proposed for the EVM is the EVM Object Format (EOF). EOF introduces a large number of changes, such as banning gas observability and code observability (i.e. no CODECOPY), and allowing only static jumps. The goal is to allow the EVM to be upgraded further, in a way that gains stronger properties, while preserving backwards compatibility (since the pre-EOF EVM would still exist).

The advantage of doing this is that it creates a natural path for adding new EVM functionalities and encourages migration to a stricter EVM with stronger guarantees. The downside is that it significantly increases the complexity of the protocol unless we can find a way to eventually deprecate and remove the old EVM. One major question is: What role does EOF play in the EVM simplification proposal, especially if the goal is to reduce the complexity of the entire EVM?

How does it interact with other parts of the roadmap?

Many of the 'improvement' proposals elsewhere in the roadmap are also opportunities to simplify away old features. To repeat some of the examples above:

· Switching to single-slot finality gives us an opportunity to remove committees, rework the economics, and make other proof-of-stake-related simplifications.

· Fully implementing account abstraction lets us remove large amounts of existing transaction-handling logic, moving it into 'default account EVM code' that all EOAs can be replaced with.

· If we transfer the Ethereum state to a binary hash tree, this can be coordinated with the new version of SSZ so that all Ethereum data structures can be hashed in the same way.

More aggressive approach: convert most of the protocol content into contract code

A more radical strategy for Ethereum is to keep the protocol itself largely unchanged, but move much of its functionality from protocol features into contract code.

The most extreme version would be to make Ethereum L1 'technically' just the beacon chain, and introduce a minimal VM (e.g. RISC-V, Cairo, or something even more minimal, specialized for proof systems) that lets anyone create their own rollups. The EVM would then become the first of these rollups. Ironically, this is exactly the same outcome as the execution environment proposals of 2019-20, though SNARKs make it significantly more viable to actually implement.

A more moderate approach would keep the relationship between the beacon chain and the current Ethereum execution environment as-is, but do an in-place swap of the EVM. We could choose RISC-V, Cairo, or another VM to be the new 'official Ethereum VM', and then convert all EVM contracts into new-VM code that interprets the logic of the original code (by compiling or interpreting it). In theory, this could even be done with the 'target VM' being a version of EOF.
