Original title: [DappLearning] Vitalik Buterin Chinese Interview
Original author: DappLearning
On April 7, 2025, Vitalik and Xiao Wei appeared together at the Pop-X HK Research House event co-organized by DappLearning, ETHDimsum, Panta Rhei and UETH.
During the event, Yan, the initiator of the DappLearning community, interviewed Vitalik, covering topics such as ETH POS, Layer2, cryptography, and AI. The interview was conducted in Chinese, and Vitalik spoke fluent Chinese.
The following is the content of the interview (the original content has been reorganized for easier reading and understanding):
01 Opinions on POS upgrade
Yan:
Hello Vitalik, I am Yan from the DappLearning community. It is a great honor to interview you here.
I started to learn about Ethereum in 2017. I remember that in 2018 and 2019, everyone had a very heated discussion about POW and POS. Perhaps this topic will continue to be discussed.
As of now, (ETH) POS has been running stably for more than four years, and there are millions of validators in the consensus network. But at the same time, the ETH/BTC exchange rate has been falling steadily, which presents both positive aspects and some challenges.
So at this point in time, what do you think of Ethereum's POS upgrade?
Vitalik:
I think the prices of BTC and ETH have nothing to do with POW and POS.
There are many different voices in the BTC and ETH communities. What these two communities do are completely different, and everyone thinks in completely different ways.
Regarding the price of ETH, I think there is a problem: ETH has many possible futures. It is conceivable that in these futures there will be many successful applications on Ethereum, but these successful applications may not bring enough value to ETH.
This is a problem that many people in the community are worried about, but it is actually a very normal problem. For example, Google makes many products and does many interesting things, but more than 90% of their revenue is still related to their search business.
The relationship between Ethereum ecosystem applications and ETH (price) is similar. Some applications pay a lot of transaction fees and consume a lot of ETH. At the same time, there are many (applications) that may be relatively successful, but the success they bring to ETH is not as much as it should be.
So this is an issue we need to think about and continue to optimize. We need to support more applications that have long-term value for Ethereum holders and ETH.
So I think the future success of ETH may come from these areas. I don't think it has much correlation with improvements to the consensus algorithm.
02 PBS Architecture and Centralization Concerns
Yan:
Yes, the prosperity of the ETH ecosystem is also an important reason that attracts our developers to build it.
OK, what do you think of Ethereum's PBS (Proposer-Builder Separation) architecture? It seems like a good direction: in the future, everyone can use a mobile phone as a light node to verify (ZK) proofs, and anyone can stake 1 ether to become a validator.
But the Builder may become more centralized, since it has to handle MEV and generate ZK proofs. If Based Rollups are adopted, the Builder may have to do even more, such as acting as a Sequencer.
In this case, will the Builder become too centralized? Even though the validator set is decentralized enough, it is a chain: if one link in the middle has a problem, it affects the operation of the entire system. So how do we solve this censorship-resistance problem?
Vitalik:
Yes, I think this is a very important philosophical question.
In the early days of Bitcoin and Ethereum, there was an arguably subconscious assumption:
Building a block and validating a block is one operation.
Suppose you are building a block containing 100 transactions: you need to execute that much gas on your own node. When you build the block and broadcast it to the world, every node in the world also has to do the same amount of work (consuming the same amount of gas). So if we set the gas limit such that every laptop or MacBook in the world, or a server of a certain size, can build blocks, then verifying those blocks requires node servers of the corresponding configuration.
This is the previous technology. Now we have ZK, DAS, many new technologies, and Statelessness (stateless verification).
Before these technologies were used, building blocks and verifying blocks needed to be symmetrical, but now they can be asymmetrical. So the difficulty of building a block may become very high, but the difficulty of verifying a block may become very low.
Let's take a stateless client as an example: if we use stateless technology and increase the gas limit tenfold, the computing power required to build a block becomes huge, and an ordinary computer may not be able to do it. At that point you may need a very high-performance Mac Studio or an even more powerful server.
But the cost of verification will be lower, because verification does not require any storage at all and relies only on bandwidth and CPU resources. If ZK technology is added, even the CPU cost of verification can be removed. If DAS is added, the cost of verification becomes very, very low. So even as the cost of building a block goes up, the cost of verifying it stays very low.
So is this better than the current situation?
This question is rather complicated. I would think about it this way. If there are some super nodes in the Ethereum network, that is, some nodes with higher computing power, we need them to perform high-performance computing.
So how can we prevent them from doing evil? For example, there are several types of attacks.
First: Create a 51% attack.
Second: Censorship attack. If they don’t accept transactions from some users, how can we reduce this risk?
Third: For MEV-related operations, how can we reduce these risks?
Regarding 51% attacks: since the verification process is done by attesters, attester nodes need to verify DAS, ZK proofs, and stateless clients. The cost of this verification is very low, so the threshold for becoming a consensus node remains relatively low.
For example, suppose some Super Nodes build blocks, and 90% of them are yours, 5% are his, and 5% are others'. Even if you refuse to accept certain transactions, it is not a particularly bad thing. Why? Because you have no way to interfere with the consensus process itself.
So you can't carry out a 51% attack; the only thing you can do is refuse certain users' transactions.
The user may only need to wait ten or twenty blocks for another person to include his transaction in the block. This is the first point.
The second point is that we have the concept of FOCIL (Fork-Choice enforced Inclusion Lists). So what does FOCIL do?
FOCIL is a way to separate the role of selecting transactions from the role of executing them. This way, the role of choosing which transactions go into the next block can be made more decentralized. Through FOCIL, small nodes gain the ability to independently select transactions for inclusion in the next block, while a large node actually has very little power over that selection [1].
This method is more complicated than before. Previously, we assumed each node was a personal laptop. But if you look at Bitcoin, it now has a more hybrid architecture, because Bitcoin miners are all mining data centers.
So in POS it works roughly the same way: some nodes require more computing power and more resources, but their rights are limited, while the other nodes can be kept very scattered and decentralized, ensuring the security and decentralization of the network. This approach is more complicated, so it is also a challenge for us.
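To make the inclusion-list idea concrete, here is a toy Python sketch of the mechanism described in [1]. This is my own illustration, not actual protocol code; all names (`build_valid_block`, `is_block_valid`, etc.) are invented, and real FOCIL operates on fork-choice rules rather than simple lists.

```python
# Toy sketch of FOCIL: a committee of small nodes publishes inclusion
# lists, and a builder's block is only considered valid if it includes
# every listed transaction (unless the block is already full). The
# powerful builder keeps discretion only over the leftover space.

def build_valid_block(mempool, inclusion_lists, max_txs):
    """Builder must start from the union of committee inclusion lists."""
    forced, seen = [], set()
    for il in inclusion_lists:          # transactions forced by small nodes
        for tx in il:
            if tx not in seen:
                seen.add(tx)
                forced.append(tx)
    block = forced[:max_txs]
    for tx in mempool:                  # builder fills the remaining space
        if len(block) >= max_txs:
            break
        if tx not in seen:
            block.append(tx)
    return block

def is_block_valid(block, inclusion_lists, max_txs):
    """Attesters reject blocks that censor a listed tx while having room."""
    required = {tx for il in inclusion_lists for tx in il}
    missing = required - set(block)
    return not missing or len(block) >= max_txs
```

The point of the sketch is the validity rule: a large builder that drops a committee-listed transaction produces a block the (cheap, decentralized) attesters simply refuse, so censorship power stays with the many small nodes.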
Yan:
Very good idea. Centralization is not necessarily a bad thing, as long as we can limit it from doing evil.
Vitalik:
Right.
03 Issues between Layer 1 and Layer 2, and future directions
Yan:
Thanks for resolving my confusion of many years. Now to the second part. As a witness to Ethereum's development, I'd say Layer2 has actually been very successful: the TPS problem has been solved, and it is no longer as congested as it was during the ICO rush.
I personally think L2 is pretty good now, but many people have proposed various solutions to the problem of L2 liquidity fragmentation. What do you think of the relationship between Layer1 and Layer2? Is the Ethereum mainnet currently too hands-off and too decentralized, with no constraints on Layer2? Does Layer1 need to make rules with Layer2, formulate profit-sharing models, or adopt solutions like Based Rollup? Justin Drake recently proposed this on Bankless, and I agree with it. What do you think? I am also curious: if there is a corresponding solution, when will it launch?
Vitalik:
I think there are several problems with our Layer2 now.
The first is that their progress in security is not fast enough. So I have been pushing for Layer2 to upgrade to Stage 1, and I hope to upgrade to Stage 2 this year. I have been urging them to do so, and at the same time I have been supporting L2BEAT to do more transparency work in this regard.
The second is the issue of L2 interoperability, that is, cross-chain transactions and communications between two L2s. If the two L2s are in the same ecosystem, interoperability needs to be simpler, faster, and less costly than it is now.
We started this work last year; it now includes the Open Intents Framework and chain-specific addresses, which are mostly UX work.
In fact, I think 80% of L2's cross-chain issues are actually UX issues.
Although the process of solving UX problems may be painful, as long as the direction is right, complex problems can be made simple. This is also the direction we are working towards.
Some things require a further step. For example, the withdrawal time of an Optimistic Rollup is one week: if you have a token on Optimism or Arbitrum, you need to wait a week to move that token to L1 or to another L2.
You can have Market Makers wait the week for you (and pay them a fee accordingly). Ordinary users can use methods like the Open Intents Framework (e.g. Across Protocol) to cross from one L2 to another, which is fine for small transactions. However, for large transactions, Market Makers' liquidity is still limited, so the fees they charge will be higher. I published an article last week [2] in which I support a 2-of-3 verification method: OP + ZK + TEE.
Because if you do 2 of 3, you can meet three requirements at the same time.
The first requirement is being completely trustless: there is no need for a Security Council. TEE technology plays only an auxiliary role, so there is no need to fully trust it.
Second, we can start using ZK technology, but ZK is still in its early stages, so we cannot rely entirely on it yet.
Third, we can reduce the withdrawal time from one week to one hour.
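The 2-of-3 rule is simple enough to state as code. This is a toy illustration of the acceptance logic only; the function name and boolean inputs are my own invention, not the actual rollup contracts described in [2].

```python
# Toy sketch of the 2-of-3 idea: a withdrawal finalizes once at least
# two of the three independent proof systems (optimistic fault proof,
# ZK validity proof, TEE attestation) agree, so no single system has
# to be fully trusted and no Security Council is needed.

def finalize_withdrawal(op_ok: bool, zk_ok: bool, tee_ok: bool) -> bool:
    votes = sum([op_ok, zk_ok, tee_ok])
    return votes >= 2   # any 2 of 3 suffice
```

The design point: a buggy or compromised ZK prover alone can neither forge nor block a withdrawal, since the other two systems outvote it, while the TEE alone can never decide anything.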
You can imagine that if users use the Open Intents Framework, the liquidity cost for Market Makers will be reduced by a factor of 168, because the time they must wait (to rebalance) drops from 1 week to 1 hour. In the long run, we plan to reduce the withdrawal time from 1 hour to 12 seconds (the current block time), and with SSF it could drop to 4 seconds.
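The arithmetic behind these factors is easy to verify. This is my own back-of-the-envelope check, assuming Market Makers' capital cost scales linearly with how long the withdrawal window keeps their funds locked:

```python
# Capital must stay locked for the whole withdrawal window, so
# shrinking the window shrinks the liquidity cost proportionally.

HOURS_PER_WEEK = 7 * 24              # 168 hours in a week
week_to_hour = HOURS_PER_WEEK / 1    # 1 week -> 1 hour: 168x less lockup
hour_to_slot = 3600 / 12             # 1 hour -> 12 s slot: another 300x
print(week_to_hour, hour_to_slot)    # 168.0 300.0
```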
Currently, we will also use zk-SNARK aggregation to parallelize the ZK proving process and reduce latency. Of course, users who verify with ZK directly don't need to go through Intents; but going through Intents keeps the cost very low. This is all part of interoperability.
Regarding the role of L1, in the early stages of the L2 Roadmap, many people may think that we can completely copy Bitcoin’s Roadmap, and L1’s use will be very limited, only doing proofs (and other small amounts of work), and L2 can do everything else.
But we found that if L1 plays no role at all, it would be dangerous for ETH.
We have talked about this before, and one of our biggest concerns is that the success of Ethereum applications will not become the success of ETH.
If ETH is not successful, our community will have no money and no way to support the next round of applications. So if L1 plays no role at all, the user experience and the entire architecture will be controlled by L2s and some applications, and no one will represent ETH. So if we can assign more roles to L1 in some applications, it will be better for ETH.
The next question we need to answer is what will L1 do? What will L2 do?
I published an article in February [3], in the world of L2 Centric, there are many important things that need to be done by L1. For example, L2 needs to send proof to L1. If an L2 has a problem, the user will need to cross-chain to another L2 through L1. In addition, Key Store Wallet and Oracle Data can be placed on L1, etc. Many such mechanisms need to rely on L1.
There are also some high-value applications, such as DeFi, which are actually more suitable for L1. An important reason why some DeFi applications are more suitable for L1 is their time horizon (investment period): users need to wait for a long time, such as one, two, or three years.
This is particularly evident in prediction markets, where questions are sometimes asked, such as what will happen in 2028?
There is a problem here. If there is a problem with an L2's governance, then theoretically all its users can exit: they can move to L1 or to another L2. But if an application on that L2 has its assets locked in a long-term smart contract, its users have no way to exit. So many theoretically safe DeFi applications are not very safe in reality.
For these reasons, some applications should still be done on L1, so we began to pay more attention to the scalability of L1.
We now have a roadmap, and by 2026, there will be about four or five ways to improve the scalability of L1.
The first is Delayed Execution (separating block verification from execution): in each slot we only verify the block and let it actually execute in the next slot. The advantage is that the maximum acceptable execution time can increase from 200 milliseconds to 3 or 6 seconds, leaving more processing time [4].
The second is the Block-Level Access List: each block must indicate, in its block information, which accounts' state and which storage this block needs to read. It is a bit like Statelessness without the witnesses. One advantage is that we can process EVM computation and IO in parallel, which is a relatively simple way to implement parallel processing.
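As a rough illustration of why declaring state up front helps, here is a toy Python sketch (my own construction, not client code): the declared access list lets a node overlap all of its slow state reads, then run transactions over already-warm in-memory state.

```python
# Toy sketch: a block-level access list separates the IO phase from the
# CPU phase, so the IO can be done concurrently before execution starts.

from concurrent.futures import ThreadPoolExecutor

def execute_block(block, read_state):
    """read_state(key) stands in for a (possibly slow) disk read."""
    keys = block["access_list"]
    # 1. IO phase: prefetch every declared key concurrently.
    with ThreadPoolExecutor() as pool:
        warm = dict(zip(keys, pool.map(read_state, keys)))
    # 2. CPU phase: run transactions against the warm in-memory state.
    return [tx(warm) for tx in block["txs"]]
```

A minimal usage example, with transactions modeled as plain functions over the warm state:

```python
state = {"alice": 10, "bob": 5}
block = {"access_list": ["alice", "bob"],
         "txs": [lambda warm: warm["alice"] + warm["bob"]]}
execute_block(block, state.get)   # all reads prefetched before execution
```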
The third is Multidimensional Gas Pricing [5], which can set the maximum capacity of a block, which is very important for security.
Another is historical data handling (EIP-4444), which no longer requires every node to permanently store all the information. For example, each node might store only 1%. We can use a p2p approach: your node stores one part, his node stores another part, so the history is stored in a more distributed way.
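A toy sketch of this distributed-storage idea (my own construction, not the EIP-4444 design itself): each node keeps only the blocks whose hash falls into its deterministic bucket, so any block can be fetched from a predictable peer and the union of all nodes' slices is the full history.

```python
# Toy sketch: deterministic 1-in-N sharding of history by block hash.

import hashlib

NUM_BUCKETS = 100   # ~1% of history per node

def bucket_of(block_id: str) -> int:
    digest = hashlib.sha256(block_id.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_BUCKETS

def my_blocks(all_block_ids, my_bucket):
    """The slice of history this node is responsible for storing."""
    return [b for b in all_block_ids if bucket_of(b) == my_bucket]
```

Because the bucket is a pure function of the block identifier, every block lands in exactly one bucket, and a node that wants an old block knows exactly which peers to ask. (A real design would replicate each bucket across several nodes for redundancy.)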
So if we can combine these four solutions, we now believe we may be able to increase L1's gas limit by 10 times. All our applications will then have the opportunity to rely more on L1 and do more there, which will be good for L1 and for ETH as well.
Yan:
OK, next question, is it possible that we will see the Pectra upgrade this month?
Vitalik:
Actually, we hope to do two things, namely, the Pectra upgrade around the end of this month, and then the Fusaka upgrade in Q3 or Q4.
Yan:
Wow, so fast?
Vitalik:
I hope so.
Yan:
The next question I want to ask is also related to this. As someone who has watched Ethereum grow all the way, we know that in order to ensure security, Ethereum has about five or six clients (consensus clients and execution clients) being developed at the same time. There is a lot of coordination work in the middle, which leads to a relatively long development cycle.
This has pros and cons. Compared to other L1s, it may be slower, but it is also safer.
But what kind of solution is there so that we don't have to wait a year and a half for each upgrade? I have seen that you have proposed some solutions. Can you introduce them in detail?
Vitalik:
Yes, there is a solution: we can improve coordination efficiency. We now have more people moving between different teams to ensure more efficient communication between them.
If a client team has a problem, they can speak up and let the research team know. In fact, this is the advantage of Tomasz becoming one of our new Executive Directors: he comes from a client team and is now also in the EF, so he can do this coordination. That is the first point.
The second point is that we can be stricter with the client teams. Our current approach is, for example, if there are five teams, we wait until all five are fully prepared before announcing the next hard fork (network upgrade). We are now considering starting the upgrade once four teams are done, so we don't have to wait for the slowest one, and this also creates more motivation.
04 How to view cryptography and AI
Yan:
So there should be some competition. It's good, and I really look forward to every upgrade, but don't make everyone wait too long.
Later I would like to ask more questions related to cryptography, which are more divergent.
When our community was first established in 2021, we gathered developers from major domestic exchanges and VC researchers to discuss DeFi. 2021 was indeed a stage when everyone was getting into DeFi, learning it and designing it; it was a wave of mass participation.
In the subsequent development of ZK, however, whether for the general public or for developers, learning ZK (Groth16, Plonk, Halo2, etc.) has been hard for later developers to catch up with, because the technology is advancing very quickly.
In addition, we can see that ZKVMs are developing very fast, which has made ZKEVMs less prominent than before. As ZKVMs gradually mature, developers no longer need to pay much attention to the underlying ZK.
What suggestions and opinions do you have on this?
Vitalik:
I think the best direction for the ZK ecosystem is that most ZK developers only need to know some high-level language, i.e. an HLL (High Level Language). They can write their application code in the HLL, while proof-system researchers continue to modify and optimize the underlying algorithms. Development needs to be layered, and developers shouldn't need to know what happens at the layer below.
There may be a problem now: the Circom and Groth16 ecosystem is very developed, but it places a fairly large limitation on ZK applications, because Groth16 has many shortcomings, such as each application needing its own Trusted Setup, and its efficiency is not very high. So we are also thinking that we need to put more resources here and help more modern HLLs succeed.
Another good option is the ZK RISC-V approach, because HLLs can compile down to RISC-V. Many applications, including the EVM and others, can be written on RISC-V [6].
Yan:
OK, so it’s good that developers only need to learn Rust. I attended Devcon Bangkok last year and heard about the development of applied cryptography, which also made me see the light.
In terms of applied cryptography, what do you think about the combination of ZKP, MPC and FHE, and what suggestions can you give to developers?
Vitalik:
Yes, this is very interesting. I think FHE has a good future, but there is a concern: MPC and FHE always require a committee, meaning they need to select seven or more nodes. If 51% or 33% of those nodes are attacked, your system has problems. It is as if the system has a Security Council, except actually more serious than a Security Council, because if an L2 is Stage 1, its Security Council needs 75% of its members to be compromised before problems occur [7]. This is the first point.
The second point concerns the Security Council itself. If they are reliable, most of their keys will be kept in cold wallets, that is, mostly offline. However, in most MPC and FHE schemes, the committee must be online all the time for the system to run, so they may be deployed on a VPS or other servers, which makes them easier to attack.
This makes me a little worried. I think many applications can still be done. They have advantages, but they are not perfect.
Yan:
Finally, I would like to ask a relatively easy question. I see that you have also been paying attention to AI recently, so let me list a few views.
For example, Elon Musk said that humans may just be a boot program for silicon-based civilization.
Then there is a view in "The Network State" that authoritarian countries may prefer AI, while democratic countries prefer blockchain.
Then from our experience in the cryptocurrency industry, the premise of decentralization is that everyone abides by the rules, checks and balances each other, and knows how to take risks, which will eventually lead to elite politics. So what do you think of these views? Just talk about your views.
Vitalik:
Yeah, I was wondering where to start answering this.
Because the field of AI is very complex. For example, five years ago, no one would have predicted that the United States would have the best closed-source AI in the world and China would have the best open-source AI. AI can improve everyone's abilities, and sometimes it can also increase the power of some centralized (state) powers.
But AI can sometimes have a more democratizing effect. When I use AI myself, I find that in areas where I already rank among the top 1,000 in the world, such as some areas of ZK development, AI actually helps me less, and I still need to write most of the code myself. But in areas where I am relatively new, AI helps me a lot. For example, Android app development: I had never done it before. I made an app ten years ago using a framework, written in JavaScript and then converted into an app, but apart from that I had never written a native Android app.
I did an experiment at the beginning of this year. I wanted to try to write an app using GPT. It was completed within an hour. It can be seen that the gap between experts and novices has been reduced a lot with the help of AI, and AI can also provide many new opportunities.
Yan:
One more thing to add, thank you for giving me a new perspective. I used to think that with AI, experienced programmers might learn faster, but it would be unfriendly to novice programmers. But in some ways, it does improve the abilities of novices. It might be a kind of equal rights, not differentiation, right?
Vitalik:
Yes, but now a very important issue that we need to think about is what effect the combination of some of the technologies we are developing, including blockchain, AI, cryptography, and some other technologies, will have on society.
Yan:
So you still hope that humans will not be ruled by just one elite, right? You also hope to achieve Pareto optimality for the whole of society, where ordinary people can become super individuals through the empowerment of AI and blockchain.
Vitalik:
Yes, super individuals, super communities, super humans.
05 Expectations for the Ethereum ecosystem and suggestions for developers
Yan:
OK, let's move on to the last question. What are your expectations and messages for the developer community? Is there anything you want to say to the developers in the Ethereum community?
Vitalik:
Ethereum application developers should think about this:
There are now many opportunities to develop applications in Ethereum, and many things that were not possible to do before can now be done.
There are many reasons for this, such as
First: Previously, the TPS of L1 was not enough, but now this problem is gone;
Second: There was no way to solve the privacy problem before, but now there is;
Third: Because of AI, the difficulty of developing anything has decreased. Although the complexity of the Ethereum ecosystem has grown, with AI everyone can still understand Ethereum better.
So I think many things that failed in the past, including five or ten years ago, may succeed now.
In the current blockchain application ecosystem, I think the biggest problem is that we have two types of applications.
The first one is very open, decentralized, secure, and idealistic (application). But they only have 42 users. The second one is a casino. The problem is that these two extremes are both unhealthy.
So we hope to build applications where:
First, users genuinely like to use them, which means they have real value, and they are better for the world.
Second, there is a business model, economically speaking, that lets them operate sustainably without relying on limited funds from foundations or other organizations. This is also a challenge.
But now I think everyone has more resources than before, so now if you can find a good idea and if you can do it well, your chances of success are very great.
Yan:
Looking back over the past few years, I think Ethereum is actually quite successful. It has always been leading the industry and working hard to solve the problems encountered by the industry under the premise of decentralization.
Another thing I feel deeply is that our community has always been non-profit. Through Gitcoin Grants in the Ethereum ecosystem, OP's retroactive rewards, and airdrop rewards from other projects, we found that building in the Ethereum community can get a lot of support. We are also thinking about how to keep the community operating stably over the long term.
The construction of Ethereum is really exciting, and we also hope to see the true realization of the world computer as soon as possible. Thank you for your valuable time.
Interview at Mount Davis, Hong Kong
April 07, 2025
Finally, here is a photo with Vitalik 📷
The references mentioned by Vitalik in the article are summarized as follows:
[1]: https://ethresear.ch/t/fork-choice-enforced-inclusion-lists-focil-a-simple-committee-based-inclusion-list-proposal/19870
[2]: https://ethereum-magicians.org/t/a-simple-l2-security-and-finalization-roadmap/23309
[3]: https://vitalik.eth.limo/general/2025/02/14/l1scaling.html
[4]: https://ethresear.ch/t/delayed-execution-and-skipped-transactions/21677
[5]: https://vitalik.eth.limo/general/2024/05/09/multidim.html
[6]: https://ethereum-magicians.org/t/long-term-l1-execution-layer-proposal-replace-the-evm-with-risc-v/23617
[7]: https://specs.optimism.io/protocol/stage-1.html?highlight=75#stage-1-rollup