Yesterday, Ethereum co-founder Vitalik published a radical article about the upgrade of the Ethereum execution layer (see “Vitalik’s Radical New Article: Execution Layer Expansion ‘No Pain, No Gain’, EVM Must Be Iterated in the Future”), in which he mentioned the hope to replace EVM with RISC-V as the virtual machine language for smart contracts.
Once published, the article immediately caused a stir in the Ethereum developer community, with several prominent developers weighing in on the proposal. Shortly after it went live, well-known Ethereum developer levochka.eth posted a lengthy rebuttal in the comments, arguing that Vitalik has made erroneous assumptions about the proof system and its performance, and that RISC-V may not be the best choice because it cannot deliver both “scalability” and “maintainability.”
The following is the original content of levochka.eth, compiled by Odaily Planet Daily.
Please do not do this.
This plan is not reasonable because you have made incorrect assumptions about the proof system and its performance.
Examining the Assumptions
From my understanding, the main arguments of the proposal are “scalability” and “maintainability.”
First, I would like to discuss maintainability.
In fact, all RISC-V zk-VMs need “precompiles” to handle compute-intensive operations. The list of precompiles for SP1 can be found in Succinct’s documentation, and you will find that it covers almost all of the important “computational” opcodes in the EVM.
Therefore, any modifications to the underlying layer cryptographic primitives require writing and auditing new “circuits” for these precompiles, which is a significant limitation.
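To make the precompile point concrete, here is a toy cost model (all constants and the precompile list are invented for illustration; this is not SP1’s actual accounting): a zkVM pays proof cost for every guest instruction it executes, so hashing in pure RISC-V code is vastly more expensive to prove than calling a dedicated precompile circuit.

```python
# Toy zkVM proving-cost model. The constants and the precompile list are
# illustrative assumptions, not measurements from SP1 or any real prover.
PRECOMPILES = {"sha256", "keccak256", "ecrecover"}

INSTRS_PER_BYTE = 300   # assumed RISC-V instructions proved per input byte
PRECOMPILE_BASE = 50    # assumed fixed cost of one precompile circuit call

def proof_rows(op: str, input_len: int) -> int:
    """Rough number of proof-table rows charged for one operation."""
    if op in PRECOMPILES:
        # Dedicated circuit: small fixed cost plus near-linear data cost.
        return PRECOMPILE_BASE + input_len
    # Pure RISC-V path: every executed instruction becomes a proved row.
    return INSTRS_PER_BYTE * input_len

# Proving a 32-byte hash: the precompile is two orders of magnitude cheaper.
software_cost = proof_rows("sha256_in_software", 32)   # 9600 rows
precompile_cost = proof_rows("sha256", 32)             # 82 rows
```

Under this model, swapping the hash function means writing and auditing a new circuit behind the precompile, which is exactly the maintenance burden described above.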
Admittedly, if performance turns out to be good enough, maintaining the “non-EVM” part of the execution client may become relatively easy. I am not sure the performance will be good enough, and my confidence here is low, for the following reasons:
“State tree computation” can indeed be significantly accelerated through friendly precompiles (such as Poseidon).
However, it remains unclear whether “deserialization” can be handled in an elegant and maintainable manner.
In addition, there are some tricky details (such as gas metering and various checks) that nominally belong to “block execution time” but should actually be classified as “non-EVM” parts, and these often face greater maintenance pressure.
Secondly, regarding the scalability aspect.
Let me reiterate one point: RISC-V absolutely cannot handle EVM workloads without precompiles.
So while the original text’s claim that “the final proof time will be dominated by the current precompile operations” is technically correct, it assumes precompiles will disappear in the future. In fact, in that future scenario precompiles will still exist, and they will correspond exactly to the compute-intensive opcodes of the EVM (signatures, hashes, and possibly large modular-arithmetic operations).
Regarding the “Fibonacci” example, it is hard to judge without digging into the low-level details, but its advantage stems at least partly from:
1. The difference between “interpretation” and “direct execution” overhead;
2. Loop unrolling (which reduces RISC-V “control flow”; whether Solidity can achieve this is still uncertain, but even a single opcode still generates a large amount of control flow and memory access due to interpretation overhead);
3. The use of smaller data types.
Here I want to point out that to obtain the advantages of points 1 and 2, the “interpretation overhead” must be eliminated. That is consistent with the philosophy of RISC-V, but it is not the RISC-V we are discussing now; rather, it is a “RISC-V-like” architecture that would need certain additional capabilities, such as a native concept of contracts.
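The interpretation-overhead point can be made concrete with a small sketch (a hypothetical register machine invented for illustration, not any real zkVM): the same Fibonacci loop that costs one native step per iteration costs several interpreted “guest instructions” per iteration, each paying a fetch/decode/dispatch cycle.

```python
# Sketch of interpretation overhead on the Fibonacci example.
# The instruction format is a toy invention, not a real ISA.

def fib_native(n):
    """Direct execution: one loop step per iteration."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fib_interpreted(n):
    """Interpret a hand-written 4-register program; return (result, steps)."""
    prog = [
        ("li",   0, 0),       # r0 (a) = 0
        ("li",   1, 1),       # r1 (b) = 1
        ("li",   2, 0),       # r2 (i) = 0
        ("mov",  3, 1),       # loop: r3 = b
        ("add",  1, 0, 1),    #       b  = a + b
        ("mov",  0, 3),       #       a  = r3
        ("addi", 2, 2, 1),    #       i += 1
        ("blt",  2, n, 3),    #       if i < n goto loop
        ("ret",  0),
    ]
    regs, pc, steps = [0, 0, 0, 0], 0, 0
    while True:
        ins = prog[pc]        # fetch + decode: pure interpreter overhead
        steps += 1
        op = ins[0]
        if op == "li":
            regs[ins[1]] = ins[2]
        elif op == "mov":
            regs[ins[1]] = regs[ins[2]]
        elif op == "add":
            regs[ins[1]] = regs[ins[2]] + regs[ins[3]]
        elif op == "addi":
            regs[ins[1]] = regs[ins[2]] + ins[3]
        elif op == "blt":
            if regs[ins[1]] < ins[2]:
                pc = ins[3]
                continue
        elif op == "ret":
            return regs[ins[1]], steps
        pc += 1
```

For `n = 10`, the native loop runs 10 iterations while the interpreter executes 54 guest instructions, each of them itself costing many host instructions to dispatch; this multiplicative factor is what “eliminating interpretation overhead” removes.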
The question arises
So, there are some issues here.
To gain maintainability, you need a RISC-V (with precompiles) onto which the EVM can be compiled and run; this is basically the current situation.
To gain scalability, a completely different approach is needed: a new, possibly RISC-V-like architecture that understands the concept of “contracts”, is compatible with Ethereum’s runtime constraints, and can execute contract code directly (without “interpretation overhead”).
I now assume you mean the second scenario (the rest of the article seems to imply it). Let me remind you that all code outside this environment will still be written for today’s RISC-V zkVMs, which has significant implications for maintenance.
Other Possibilities
We could compile contracts from high-level EVM opcodes down to bytecode, with the compiler responsible for ensuring that the generated program preserves invariants such as never overflowing the stack. I would like to see this demonstrated on the regular EVM first. A SNARK attesting to correct compilation could be supplied alongside the contract at deployment.
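As a sketch of what such a compiler-side invariant check might look like (the opcode table and limit here are simplified stand-ins, not a real EVM verifier), one can statically walk straight-line bytecode and bound the stack depth; a deploy-time SNARK would then attest that this check passed:

```python
# Simplified stack-effect table: opcode -> (items popped, items pushed).
# Only a handful of illustrative opcodes; the real EVM table is larger.
EFFECTS = {
    "PUSH1": (0, 1),
    "POP":   (1, 0),
    "ADD":   (2, 1),
    "MSTORE": (2, 0),
    "DUP1":  (1, 2),
}

def check_stack(code, limit=1024):
    """Statically verify that straight-line code never underflows the
    stack and never exceeds the depth limit."""
    depth = 0
    for op in code:
        pops, pushes = EFFECTS[op]
        if depth < pops:
            return False          # stack underflow
        depth = depth - pops + pushes
        if depth > limit:
            return False          # stack overflow
    return True
```

The compiler would run this (extended to full control flow) on its output, and the deployment-time proof would certify the run rather than re-checking the invariant inside every transaction.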
We could also construct a “formal proof” that certain invariants are preserved. As far as I know, this approach (rather than “virtualization”) has been used in some browser contexts. By producing a SNARK of such a formal proof, you can achieve a similar result.
Of course, the simplest choice is to just grit your teeth and do it…
Build a minimal “on-chain” MMU
You may have implied this in the article, but allow me to spell it out: if you want to eliminate virtualization overhead, you must execute compiled code directly, which means you must somehow prevent the contract (now an executable program) from writing to the kernel’s (i.e., the non-EVM implementation’s) memory.
Therefore, we need some kind of “Memory Management Unit” (MMU). The paging mechanism of traditional computers is probably unnecessary because the “physical” memory space is nearly infinite. This MMU should be as lean as possible (since it operates at the same level of abstraction as the architecture itself), but certain functions, such as transaction atomicity, could be moved into this layer.
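A minimal software MMU of this kind might look like the following sketch (a hypothetical design for illustration; the kernel boundary and memory sizes are invented): a flat address space whose low region is writable only in kernel mode, so a directly executed contract cannot clobber the non-EVM implementation’s state.

```python
class MiniMMU:
    """Toy flat-memory MMU sketch: no paging, just a kernel/contract
    boundary check on every store. Addresses and sizes are illustrative."""

    KERNEL_TOP = 0x1000   # assumed boundary: below this is kernel memory

    def __init__(self, size=0x10000):
        self.mem = bytearray(size)
        self.kernel_mode = False   # contracts run with this flag off

    def store(self, addr, value):
        """Write one byte, rejecting contract writes into kernel memory."""
        if addr < self.KERNEL_TOP and not self.kernel_mode:
            raise PermissionError("contract write into kernel memory")
        self.mem[addr] = value & 0xFF

    def load(self, addr):
        """Reads are unrestricted in this toy model."""
        return self.mem[addr]
```

Because every access goes through one comparison rather than a page-table walk, the check stays cheap to prove, and features like transaction atomicity could be layered on top (e.g., by snapshotting `mem` per transaction).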
At this time, the provable EVM will become the kernel program running on this architecture.
RISC-V may not be the best choice
Interestingly, under all these constraints, the best instruction set architecture (ISA) for this task may not be RISC-V but something closer to EOF-EVM, for the following reasons:
“Small” opcodes actually generate a large number of memory accesses, which existing proof methods struggle to handle efficiently.
As for branching overhead, in our recent paper Morgana we demonstrated how to prove code with “static jumps” (similar to EOF) at precompile-level performance.
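The static-jump distinction can be illustrated with a small sketch (toy instruction format, not the Morgana paper’s actual machinery): when every jump target is a compile-time constant, the prover can fix the control-flow graph ahead of time, whereas a dynamic jump takes its target from the stack and must be range-checked at proof time.

```python
# Toy instruction format: (opcode, argument). A static jump carries its
# target as a constant; a dynamic jump (arg=None) gets it from the stack.

def static_cfg(code):
    """Return the list of jump targets if all jumps in `code` are static,
    so the control-flow graph is known at compile time; return None if
    any jump is dynamic and the CFG cannot be fixed in advance."""
    targets = []
    for op, arg in code:
        if op == "JUMP":
            if arg is None:
                return None       # dynamic jump: target only known at runtime
            targets.append(arg)   # static jump: target is a constant
    return targets

# EOF-style code proves cheaply: the CFG below is [3], fixed up front.
eof_like = [("PUSH1", 5), ("ADD", None), ("JUMP", 3), ("STOP", None)]
# Plain-EVM-style code forces a runtime-proved target lookup.
legacy = [("PUSH1", 3), ("JUMP", None), ("STOP", None)]
```

Under this model, EOF-like code lets the prover specialize per basic block, while each dynamic jump adds a proved lookup on the critical path.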
My suggestion is to build a new proof-friendly architecture with a minimal MMU that lets each contract run as a separate executable. I do not think it should be RISC-V; rather, it should be a new ISA optimized for the constraints of SNARK protocols, and one that partially inherits a subset of EVM opcodes might be even better. As we know, precompiles are here to stay whether we like them or not, so RISC-V brings no simplification here.
Developer refutes Vitalik: The premise is incorrect, RISC-V is not the best choice.
This article is from: Ethereum developer levochka.eth
Compiled by: Odaily Planet Daily (@OdailyChina); Translator: Azuma (@azuma_eth)