Compiled by GaryMa, Wu Talks Blockchain

The Ethereum Foundation research team held its 13th AMA on the Reddit forum on February 25, 2025. Community members left questions in the thread and research team members answered them. Topics included the EXECUTE precompile, native Rollups, the Blob fee model, DA value capture, the endgame of block construction, reflections on the L2-centric strategy, the Verge, VDFs, encrypted mempools, and academic funding. Wu Blockchain summarizes the questions and technical points raised in this AMA as follows:

Question 1: Native Rollup and EXECUTE precompilation

Question:

You may have seen Martin Köppelmann’s speech, in which he proposed the concept of “Native Rollups”, which is similar to our earlier concept of “Execution Shards”.

In addition, Justin Drake also proposed a "native Rollup" solution, suggesting that some functions of L2 be integrated into the consensus layer.

This is important to me because today's L2s don't deliver what I expect from Ethereum — for example, they have issues like admin backdoors. I also don't see them solving these problems in the future because they will become obsolete sooner or later if they can't be upgraded. How are these proposals progressing? Has the community reached consensus on these ideas, or is there a general consensus that Rollup should remain organizationally separate from Ethereum? Are there other related proposals?

Answer (Justin Drake — Ethereum Foundation):

To avoid confusion, I suggest calling Martin's proposal "execution sharding," a concept that has been around for nearly a decade. The main difference between execution sharding and native Rollups is flexibility. Execution sharding yields a single kind of chain from a preset template, such as a complete replica of the L1 EVM, typically instantiated top-down as a fixed number of shards via hard fork. Native Rollups are customizable chains with flexible sequencing, data availability, governance, bridging, and fee settings, instantiated bottom-up and permissionlessly through a programmable precompile. I think native Rollups are more in line with Ethereum's programmable spirit.

We need to give EVM-equivalent L2s a path to shed their security councils while retaining full L1 security and EVM equivalence across L1 hard forks. Execution sharding, lacking flexibility, can hardly meet the needs of existing L2s. Native Rollups may open up new design space by introducing an EXECUTE-style precompile (possibly with an auxiliary DERIVE precompile to support derivation).

About “Community Consensus”:

The discussion of native Rollups is still in its early stages. But I have found it is not hard to pitch the concept to developers of EVM-equivalent Rollups: if a Rollup can opt in to being "native," it is almost a free upgrade provided by L1, so why not accept it? Notably, the founders of top Rollups such as Arbitrum, Base, Namechain, Optimism, Scroll, and Unichain have expressed interest at the 17th Sequencing Call and on other occasions.

By comparison, I think promoting native Rollups is at least 10 times easier than promoting Based Rollups. A Based Rollup is not a free upgrade at first glance: it gives up MEV revenue, and the 12-second block time may hurt user experience. In fact, with incentive-compatible sequencing and preconfirmation mechanisms it can provide a better experience, but that takes more time to explain and digest.

Technically, the EXECUTE precompile is subject to a gas limit and a dynamic fee mechanism similar to EIP-1559 to prevent DoS attacks. For optimistic L2s this is not a problem, because EXECUTE is only called when fraud is alleged. For pessimistic Rollups, data availability (DA) may be a bigger bottleneck than execution, because validators can easily verify SNARKs while home network bandwidth is a fundamental limitation.
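As a rough illustration of the metering described above, here is a minimal sketch in Python; every name and constant is hypothetical, since the precompile's interface has not been specified:

```python
# Hypothetical sketch of EIP-1559-style metering for an EXECUTE-like
# precompile. All names and constants are illustrative, not a spec.

EXECUTE_GAS_TARGET_PER_BLOCK = 2_000_000  # assumed per-block target
UPDATE_DENOMINATOR = 8                    # EIP-1559-style adjustment rate
MIN_EXECUTE_BASEFEE = 1                   # wei per unit of EXECUTE gas

def update_execute_basefee(prev_basefee: int, gas_used_last_block: int) -> int:
    """Move the EXECUTE basefee up or down depending on whether the last
    block used more or less EXECUTE gas than the target (EIP-1559 rule)."""
    delta = gas_used_last_block - EXECUTE_GAS_TARGET_PER_BLOCK
    adjustment = prev_basefee * delta // (EXECUTE_GAS_TARGET_PER_BLOCK * UPDATE_DENOMINATOR)
    return max(prev_basefee + adjustment, MIN_EXECUTE_BASEFEE)

def charge_execute_call(balance: int, basefee: int, gas_limit: int) -> int:
    """A caller prepays gas_limit * basefee before the precompile
    re-executes (or verifies a proof for) a rollup block, bounding the
    worst-case work per L1 block and preventing DoS."""
    cost = gas_limit * basefee
    if balance < cost:
        raise ValueError("insufficient balance for EXECUTE call")
    return balance - cost
```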

About the “current situation”:

Looking back at history, Vitalik proposed an EXECTX precompile in 2017, before the terms "native" or "Rollup" even existed. It was too early then, but in 2025, amid the "native Rollup" wave, the idea of adding EVM introspection has regained attention.

Regarding “Should Rollup be organizationally separated from Ethereum”:

An ideal endgame model is to treat native Rollups and Based Rollups as smart contracts on L1, just with lower fees. They enjoy L1's network effects and security while remaining scalable.

For example, ENS is currently an L1 smart contract. In the future, I expect Namechain to become an application chain compatible with both native and Based, essentially a scalable L1 smart contract. It can retain organizational independence (such as token economics and governance) while being deeply integrated into the Ethereum ecosystem.

Inline questions:

Q: Many people may see execution sharding as an advantage, yet without built-in execution sharding on offer, native L2s now look like a suboptimal choice that happens to be the only option.

Answer (Justin Drake):

The EXECUTE precompile is more flexible and powerful than execution sharding. In fact, it can simulate execution sharding, but not the other way around. If someone wants an exact replica of the L1 EVM, native Rollups provide that option too.

Q: The problem I’m looking to solve is the need for a neutral, trusted, Ethereum-branded Rollup, rather than outsourcing responsibility to a corporate-operated Rollup, which doesn’t seem to meet the need.

Answer (Justin Drake):

This can be achieved through the EXECUTE precompile. As a preliminary idea, the Ethereum Foundation could use it to deploy 128 "shards".

Q: You mentioned that native L2s are customizable chains that can be instantiated bottom-up through a precompile, which better fits Ethereum's programmable spirit; you also mentioned the need to give EVM-equivalent L2s a path away from security councils. So if the base layer does not implement sequencing, bridging, and some kind of governance mechanism, can we really get rid of security councils? Failing to keep up with EVM changes is only one way of becoming obsolete. In execution sharding we solve these problems through hard-fork upgrades, benefiting from subjective governance (subjectivocracy). But for things built on top, the base layer does not interfere with upper-layer programs: if a bug occurs, we will not risk a fork to rescue the application layer. Did the teams you contacted make it clear that if Ethereum ships EXECUTE, they will completely remove their security councils and become fully trustless?

Answer (Max Gillett):

The main reason security councils exist is that fraud-proof and validity-proof systems are very complex, and even a single implementation bug in a verifier can be catastrophic. If this complex logic (at least for fraud proofs) is enshrined in L1 consensus, client diversity can reduce the risk, which is an important step toward removing security councils. I think that if the EXECUTE precompile is designed properly, most of the remaining "Rollup application logic" (bridging, messaging, and so on) can be made easily auditable and held to the standard of DeFi smart contracts, which generally do not need a security council.

Subjective governance is indeed a simple way to upgrade, but it is only practical when there is little competition between shards. Part of the point of programmable native Rollups is to let existing L2s keep experimenting with dimensions such as sequencing and governance, with the market ultimately deciding. I expect a whole spectrum of native Rollups, from community-deployed zero-governance versions (tracking the L1 EVM) to versions with token governance and experimental precompiles.

Answer (Justin Drake):

Regarding “Does the team commit to being completely trustless?”

What I can confirm is:

1. Many L2 teams hope to achieve complete trustlessness.

2. A mechanism like EXECUTE is necessary to achieve this goal.

3. For some applications (such as the minimal execution sharding that Martin wants), EXECUTE is sufficient to achieve complete trustlessness.

These three points are enough to push us onto the EXECUTE path. Of course, for some specific L2s EXECUTE may not be enough, which is why a DERIVE precompile has been floated in early discussions.

Question 2: Optimizing the Blob Fee Model

Question:

The Blob fee model seems incomplete and overly simple: the minimum fee is only 1 Wei (the smallest unit of ETH). Combined with EIP-1559-style pricing, if Blob capacity is greatly expanded we may not see Blob fees rise for a long time. This is not ideal. We want to encourage Blob use, but we don't want the network to carry this data for free. Are there any plans to adjust the Blob fee model? If so, how will it change? What alternatives or adjustments are being considered?

Answer (Vitalik Buterin):

I think the protocol should be kept simple, avoid over-optimizing for short-term conditions, and unify the market logic for execution gas and Blob gas. EIP-7706, which adds an independent gas dimension for calldata, is one major step in that direction.

I support introducing a super-exponential basefee adjustment, an idea that has been proposed repeatedly in different contexts. If consecutive blocks exceed the target, fees rise super-exponentially and quickly reach a new equilibrium. With the right parameters, almost any gas price spike can be brought back to stability within a few minutes.
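A minimal sketch of what such a rule could look like, with entirely illustrative parameters (the proposal has no canonical spec yet): each consecutive over-target block increases not just the fee but the adjustment rate itself.

```python
import math

TARGET = 15_000_000   # assumed gas target per block
BASE_RATE = 1 / 8     # normal EIP-1559-style maximum step
ESCALATION = 1.25     # growth of the step during sustained congestion

def next_basefee(basefee: float, gas_used: int, consecutive_full: int) -> float:
    """One possible super-exponential update: the effective adjustment
    rate grows exponentially with the run of over-target blocks."""
    utilization = (gas_used - TARGET) / TARGET  # in [-1, 1]
    rate = BASE_RATE * (ESCALATION ** consecutive_full)
    return basefee * math.exp(rate * utilization)

# Ten consecutive 2x-target blocks: the fee multiplies by more each
# block, reaching a new equilibrium far faster than a fixed-rate rule.
fee, streak = 10.0, 0  # arbitrary starting fee
for _ in range(10):
    fee = next_basefee(fee, 2 * TARGET, streak)
    streak += 1
print(f"basefee multiplier after 10 full blocks: {fee / 10:.0f}x")  # ~64x
```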

Another independent idea is to simply increase the minimum Blob fee. This would reduce peak usage (good for network stability) and increase more consistent fee burning.

Answer (Ansgar Dietrichs — Ethereum Foundation):

Your concerns about the Blob fee model are valid, especially regarding the fee mechanism's efficiency. Indeed, this and "L1 value accrual" are both big topics, but I want to focus on efficiency first.

We discussed this issue during the development of EIP-4844 and ultimately decided to set the minimum fee to 1 Wei as a "neutral value" for the initial implementation. Later observations showed that this did pose challenges to L2 during the transition from non-congested to congested. Max Resnick proposed a solution in EIP-7762, suggesting that the minimum fee be set close to zero during non-congested periods, but rise faster when demand increases.

This proposal came late in the development of the Pectra fork, and implementing it could have delayed the fork. We discussed in RollCall #9 (an L2 feedback forum) whether the fork should be delayed; L2 feedback indicated this was no longer an urgent issue, so we decided to keep the status quo in Pectra. Future forks may adjust it if the ecosystem needs it.

Answer (Barnabé Monnot — Ethereum Foundation):

Thanks for your question. Indeed, pre-EIP-4844 research (done by u/dcrapis) showed that the transition from 1 Wei to a reasonable market price could be problematic and disruptive in times of congestion, which we see every time Blobs get congested. Hence EIP-7762, which proposes to increase the minimum Blob base fee.

However, even if the base fee is 1 Wei, that does not mean Blobs are "free-riding" on the network. First, Blobs usually pay priority fees to compensate block proposers. Second, to determine whether they are free, we have to ask whether Blobs occupy resources that are not reasonably priced. Someone noted that the increased reorg risk Blobs introduce (affecting liveness) is not compensated, and I responded to this point on X.

I think the discussion should focus on compensating for that liveness risk. Some people tie Blob base fees to value accrual, since base fees are burned (EIP-1559): if base fees are low, value accrual is low, so shouldn't base fees be raised to collect more tax from L2s? I think this is short-sighted. First, the network would have to define a "reasonable tax rate" (as in fiscal policy); second, I believe the growth of the Ethereum economy will bring more value, and unreasonably raising the cost of Blobs (the raw material for growing that economy) is counterproductive.

Answer (Dankrad Feist — Ethereum Foundation):

I want to clarify that concerns about Blob fees being too low are overblown and somewhat short-sighted. The crypto space is likely to grow significantly over the next 2–3 years, and at this time, we should think less about fee extraction and more about long-term development.

Nevertheless, I think Ethereum's current pure congestion-pricing resource model is not ideal, both for price stability and for ETH's long-term value accrual. Once Rollup usage stabilizes, a minimum-price model that occasionally degenerates into congestion pricing would be better. In the short term, I also support a higher Blob minimum price as the better choice.

Answer (Justin Drake — Ethereum Foundation):

Regarding "Are you planning a redesign?"

Yes. EIP-7762 proposes raising the minimum Blob base fee from 1 Wei to a higher value, such as 2²⁵ Wei.
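For context, the sketch below applies the actual EIP-4844 pricing rule (with math.exp standing in for the integer fake_exponential that clients use) to show why the floor matters: it determines how long sustained congestion takes to reach a meaningful price.

```python
import math

BLOB_BASE_FEE_UPDATE_FRACTION = 3_338_477  # EIP-4844 constant
GAS_PER_BLOB = 2**17                       # EIP-4844 constant

def blocks_to_reach(target_fee: float, min_fee: int,
                    blobs_over_target: int = 3) -> float:
    """Consecutive saturated blocks (here, 3 blobs over target each)
    needed for the base fee to climb from the floor to target_fee,
    given fee = min_fee * exp(excess_blob_gas / UPDATE_FRACTION)."""
    excess_needed = math.log(target_fee / min_fee) * BLOB_BASE_FEE_UPDATE_FRACTION
    return excess_needed / (blobs_over_target * GAS_PER_BLOB)

print(blocks_to_reach(10**9, 1))      # ~176 blocks (~35 min) from a 1 wei floor
print(blocks_to_reach(10**9, 2**25))  # ~29 blocks (~6 min) from a 2**25 wei floor
```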

Answer (Davide Crapis — Ethereum Foundation):

I support raising the minimum base fee, which I mentioned in my original 4844 analysis. There was some opposition from core developers at the time, but the consensus now seems to have shifted in its favor. I think a minimum base fee (even a slightly lower one) makes sense and is not short-sighted. Demand will grow in the future, but so will supply, and we may once again see the long stretches at the minimum Blob fee that we have seen over the past year.

More broadly, Blobs also consume network bandwidth and mempool resources, which are not currently priced. We are investigating upgrades that may improve Blob pricing along these lines.

Inline questions:

Q: I want to emphasize that this is not an attempt to squeeze maximum value out of L2s, because that accusation is often used to dismiss any questioning of Blob pricing.

Answer:

Thanks for the clarification, that's exactly right. The point is not to maximize extraction, but to design a fee mechanism that encourages adoption while pricing resources fairly and facilitating the development of a fee market.

Question 3: DA and L1/L2 value capture

Question:

L2 expansion has led to a significant reduction in the value accumulation of L1 (Ethereum mainnet), affecting the value of ETH. In addition to the saying that "Layer 2 will eventually burn more ETH and process more transactions", what specific plans do you have to solve this problem?

Answer (Justin Drake — Ethereum Foundation):

Blockchain revenue (whether L1 or L2) mainly comes from two parts: congestion fees (i.e., base fees) and contention fees (i.e., MEV, maximal extractable value).

Let's start with contention fees. As application and wallet design advances, I think MEV will increasingly be captured upstream (by applications, wallets, or users) and will eventually be taken almost entirely by entities close to the source of order flow, leaving downstream infrastructure (L1s and L2s) only scraps. In the long run, chasing MEV may be futile for L1s and L2s.

Now for congestion fees. Historically, L1's bottleneck has been EVM execution: the hardware requirements of consensus participants (such as disk I/O and state growth) have capped execution gas. But with modern designs that scale execution via SNARKs or fraud proofs, execution resources enter a "post-scarcity era" and the bottleneck shifts to data availability (DA). Because validators rely on limited home network bandwidth, DA is fundamentally scarce: data availability sampling (DAS) only provides a roughly 100x linear scaling, unlike SNARKs or fraud proofs, which scale almost without bound.

So, we focus on DA economics, which I believe is the only sustainable source of income for L1. EIP-4844 (increasing DA supply through Blobs) has been implemented for less than a year. Blob demand has grown over time (mainly driven by induced demand), from an average of 1 Blob/block to 2 and 3. Now that supply is saturated and price discovery is just beginning, low-value "junk" transactions are being squeezed out by transactions with higher economic density.

If DA supply is stable for a few months, I expect hundreds of ETH to be burned through DA every day. But L1 is currently in "growth mode" and the upcoming Pectra hard fork (expected to be launched in a few months) will increase the target number of blobs from 3 to 6. This will overwhelm the blob fee market and demand will take months to catch up. In the next few years, as Danksharding is fully launched, DA supply and demand will play a cat-and-mouse game.

In the long run, I think DA demand will exceed supply. Supply is limited by home network bandwidth, and roughly 100 times one home connection's throughput may not meet global demand, especially since humans always find new ways to consume bandwidth. I expect Ethereum to stabilize at 10 million TPS (about 100 transactions per person per day) within 10 years, which would bring in roughly $1 billion in revenue per day even if each transaction paid only $0.001.

Of course, DA income is only part of the ETH value accumulation. Issuance and currency premium are also critical. I recommend you to read my 2022 Devcon speech.

Inline questions:

Q: You said "If DA supply remains unchanged for a few months, hundreds of ETH will be burned through DA every day." What is this prediction based on? Data from the past 4 months of saturated Blob targets does not seem to support that kind of growth in paying demand. How do you infer from this data that "high paying demand" will rise significantly within a few months?

Answer (Justin Drake):

My rough model is that "real" economic transactions (such as users trading tokens) can afford small fees, such as $0.01 per transaction. My guess is that a lot of "junk" transactions (bot-generated) are being replaced by real transactions. Once real transaction demand exceeds DA supply, price discovery will begin.

Answer (Vitalik Buterin):

Many L2s are currently either using off-chain DA or postponing their launch, because if they use on-chain DA as planned, they will fill up the Blob space alone, causing fees to skyrocket. L1 transactions are daily decisions made by many small participants, while L2 Blob space is a long-term decision made by a few large participants and cannot be simply inferred from the daily market. I think even if Blob capacity increases significantly, there is still a high chance that there will be a huge demand that is willing to pay a reasonable fee.

Q: 10 million TPS? This seems unrealistic. Can you explain how it is possible?

Answer (Justin Drake):

I recommend watching my 2022 Devcon speech.

Simply put, the factors multiply together (see the arithmetic sketch after this list):

● L1 raw throughput: 10 TPS

● Rollups: 100x improvement

● Danksharding: 100x improvement

● Nielsen's Law (over 10 years): 100x improvement
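The arithmetic behind the claim, reproduced here for clarity (all four factors are Justin's rough assumptions, not measurements):

```python
l1_tps = 10
tps = l1_tps * 100 * 100 * 100     # rollups x danksharding x Nielsen's Law
assert tps == 10_000_000

tx_per_day = tps * 86_400          # ~864 billion transactions per day
print(tx_per_day * 0.001)          # ~$0.86B/day in fees at $0.001 per tx
print(tx_per_day / 8_000_000_000)  # ~108 tx/person/day at 8 billion people
```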

Q: I believe the supply side can do it, but what about the demand side?

Answer (Dankrad Feist — Ethereum Foundation):

All blockchains have a value-accrual problem, and there is no perfect answer. If Visa charged a fixed fee per transaction regardless of amount, its revenue would shrink dramatically, yet that is exactly the situation blockchains are in. The execution layer is slightly better off than the data layer: it can extract priority fees that reflect urgency, while the data layer only charges a flat fee.

My advice is to create value first; without value being created, there is nothing to accrue. To that end, we should maximize the Ethereum data layer so that alternative DAs become unnecessary; scale L1 so that high-value applications can run on it; and encourage projects like EigenLayer to expand the use of ETH as (non-financial) collateral. (Purely financial collateral is harder to scale and may exacerbate death-spiral risk.)

Q: Isn't "encouraging EigenLayer" contradictory to "making alternative DA unnecessary"? If DA is the only sustainable source of income, doesn't supporting EigenLayer risk letting EIGEN stakers take away the potential 10 million TPS, or $1 billion a day, in revenue? As a solo validator and EigenLayer operator, I feel as if I am wheeling in a Trojan horse, which is very conflicting.

Answer (Dankrad Feist):

I think of EigenLayer as more of a decentralized insurance product (of which EigenDA is just one) collateralized by ETH. I hope Ethereum DA expands to the point where EigenDA is not necessary for financial use cases.

Justin thinks DA is the main source of income for Ethereum, which may be wrong. Ethereum already has something more valuable - a highly liquid execution layer, of which DA is only a small part (but useful for white label Ethereum and high-scalability applications). DA has a moat, but its price is much lower than the execution layer, so more expansion is needed.

Answer (Justin Drake):

Haha, Dankrad and I have been arguing about this for the past few years. I think the execution layer is not defensible, MEV will be captured by applications, and SNARKs will make execution no longer a bottleneck. Time will tell.

Answer (Dankrad Feist):

SNARKs have no impact on this. Synchronous state access is the root of both the value and the limitation of the execution layer; what a single core can execute has nothing to do with SNARKs. I am not saying DA has no value accrual, but the execution layer's ability to charge per transaction may exceed DA's by 2 to 3 orders of magnitude. The DA that can charge a premium is likely DA bundled with sequencing, not generic DA.

Answer (Justin Drake):

You believe that "contention" (state-access restrictions or ordering constraints) has value. I agree it has value, but I don't think L1s or L2s will capture it in the long run: applications, wallets, and users close to the source of order flow will recapture it.

L1 DA is irreplaceable for applications that require top-level security and composability. EigenDA is the "best fit" alternative DA, often used as an "overflow" choice for high-volume, low-value applications (like games).

Question 4: The endgame of block construction

Question:

How will Ethereum's endgame block construction work? The trusted gateway model proposed by Justin looks like a centralized sequencer, which may not be compatible with the APS + ePBS (enshrined proposer-builder separation) future we expect. The current FOCIL (fork-choice enforced inclusion list) design is not suited to transactions carrying MEV, so block construction seems tilted toward non-financial L1 applications, which may push applications onto fast, centrally sequenced L2s.

Going a step further, can we design an efficient sequencing system on L1 that does not maximize MEV extraction? Do all efficient, low-extraction designs require a trusted principal (like a centralized sequencer or a preconfirmation gateway)? Is multiple concurrent proposers (MCP), as in BRAID, still being explored?

Answer (Justin Drake — Ethereum Foundation):

I don't quite understand what you mean. Let me clarify a few points:

1. APS (attester-proposer separation) and ePBS (enshrined proposer-builder separation) are different design areas; this may be the first time I have seen "APS ePBS" combined.

2. I understand a gateway as something like a "preconfirmation relay". Just as ePBS eliminates the relay middleman, APS eliminates the need for gateways: under APS, an L1 execution proposer (if sufficiently sophisticated) can issue preconfirmations directly without delegating to a gateway.

3. Saying "gateways are incompatible with APS" is like saying "relays are incompatible with ePBS" — the whole point of the design was to remove the middleman! Gateways are just a temporary complication until APS arrives.

4. Even before APS, I don't understand why gateways are compared to centralized sequencers. Centralized sequencing is permissioned, while the gateway market (and the set of L1 proposers who delegate to gateways) is permissionless. Is it because there is a single gateway sequencer per slot? By that logic L1 is also a centralized sequencer, since there is a single proposer per slot. The core of decentralized sequencing is rotating ephemeral sequencers drawn from a permissionless set.

I think MCP (multiple concurrent proposers) is a suboptimal design for several reasons: it introduces centralizing multi-block games, complicates fee handling, and requires complex infrastructure (such as VDFs, verifiable delay functions) to prevent last-moment bidding.

If MCP is as good as Max Resnick says, we will soon see the results on Solana. Max now works full-time on Solana, Anatoly also supports MCP to reduce latency, and Solana iterates very quickly™. By the way, I am also happy to see L2s experiment with MCP permissionlessly. But when Max was at Consensys (MetaMask's parent company), he failed to convince its in-house L2, Linea, to switch to MCP.

Answer (Barnabé Monnot — Ethereum Foundation):

I want to offer an alternative endgame vision. My initial roadmap, already quite challenging, is as follows:

● Deploy FOCIL to ensure censorship resistance and begin decoupling scaling constraints from local block-construction constraints.

● Deploy SSF (single-slot finality) as soon as possible, with slot times as short as possible. This requires deploying Orbit and keeping the validator set size consistent with the SSF and slot-time targets.

At the same time, I believe application-layer improvements (such as BuilderNet, the various Rollups, and Based Rollups) can keep block construction innovating and support new applications.

In the meantime, we should seriously consider different architectures for L1 block construction, including BRAID. Perhaps the endgame will never be settled? Who knows. But once FOCIL and SSF/shorter slots are deployed, the next steps will be much easier to judge.

Question 5: Do you regret focusing on L2?

Question:

Given the community sentiment, do you still believe that focusing on L2 was the right choice? If you could go back in time, what would you change?

Answer (Ansgar Dietrichs — Ethereum Foundation):

My view is that Ethereum’s strategy has always been to pursue principled architectural solutions. In the long run, Rollup is the only principled solution needed to scale blockchain to the base layer of the global economy. Monolithic chains require “every participant verifies everything”, while Rollup greatly reduces the verification burden through “execution compression”. Only the latter can scale to billions of users (and potentially even AI agents).

Looking back, I feel like we didn’t pay enough attention to the path to the end goal and the intermediate user experience. Even in a Rollup-dominated world, L1 still needs to scale significantly, as Vitalik recently mentioned. We should have realized that continuing to scale L1 while pushing L2 would bring more value to users during the transition period.

I think Ethereum has been somewhat complacent due to a lack of real competition for a long time. Now more intense competition has exposed these misjudgments and is also pushing us to deliver better "products", not just theoretically correct solutions.

But to reiterate, Rollup is critical to achieving the “scaling end game.” The specific architecture is still evolving — Justin’s exploration of native Rollup, for example, shows that the approach is still being adjusted — but the general direction is clearly correct.

Answer (Dankrad Feist — Ethereum Foundation):

I disagree on some points. If Rollups are defined as "scaled DA plus validated execution," how are they different from execution sharding?

In fact, we think of Rollup more as "white label Ethereum". To be fair, this has freed up a lot of energy and funds. If we only focused on implementing sharding in 2020, we would not have the current progress in zkEVM and interoperability research today.

Technically, we can now achieve any goal — highly scalable L1, extremely scalable shard chains, or a Rollup base layer. The best thing for Ethereum is a combination of the first and third.

Question 6: ETH economic security risk

Question:

If the USD price of ETH falls below a certain level, will it threaten the economic security of Ethereum?

Answer (Justin Drake — Ethereum Foundation):

If we want Ethereum to be resilient to attacks — including those from nation-states — then high economic security is essential. Currently, Ethereum has ~$80 billion in slashable economic security (~$2,385 per ETH based on 33,644,183 ETH staked), the highest of any blockchain. In comparison, Bitcoin has ~$10 billion in (non-slashable) economic security.
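The figure is easy to reproduce from the numbers given:

```python
staked_eth = 33_644_183
usd_per_eth = 2_385
print(f"${staked_eth * usd_per_eth / 1e9:.1f}B")  # ~$80.2B slashable security
```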

Question 7: Mainnet expansion and fee reduction plan

Question:

What plans does the Ethereum Foundation have to improve mainnet scalability and reduce transaction fees in the next few years?

Answer (Vitalik Buterin):

1. Expand L2: Add more blobs, such as PeerDAS in Fusaka, to further increase data capacity.

2. Optimize interoperability and user experience: Improve cross-L2 interactions, such as the recent Open Intents Framework.

3. Moderately increase the L1 Gas limit.

Question 8: Future application scenarios and L1/L2 collaboration

Question:

What applications and use cases have you designed for Ethereum in the following time periods:

● Short-term (<1 year)

● Medium term (1–3 years)

● Long term (4+ years)

How do the activities of L1 and L2 work together during these time periods?

Answer (Ansgar Dietrichs — Ethereum Foundation):

This is a broad question, and I offer some insights to focus on the overall trend:

● Short term (<1 year): Focus on stablecoins, which have become pioneers in real-world applications due to their fewer regulatory restrictions. Small-scale cases such as Polymarket are also beginning to show their influence.

● Medium term (1–3 years): Expand to more real-world assets (such as stocks and bonds), leverage DeFi building blocks for seamless interoperability, and innovate in putting business processes on chain, governance, prediction markets, and more.

● Long term (4+ years): Realize "Real World Ethereum" (DC Posch's vision): build real products for billions of users and AI agents, with crypto as an enabler rather than a selling point.

● L1/L2 relationship: The original vision of “L1 is only for settlement and rebalancing” needs to be updated. L1 expansion continues to be important, and L2 is still the main force of expansion. The relationship will evolve further in the coming months.

Answer (Carl Beekhuizen — Ethereum Foundation):

We focus on scaling the entire technology stack rather than designing for specific applications. Ethereum’s strength is being neutral about what runs in the EVM, providing the best platform for developers. The core theme is scaling: how to build the most powerful system while maintaining decentralization and censorship resistance.

● Short term (<1 year): The focus is on shipping PeerDAS and significantly increasing the number of Blobs per block, while also improving the EVM, such as shipping EOF (EVM Object Format) as soon as possible. Research continues in parallel, including statelessness, gas repricing, and zero-knowledge EVM work.

● Medium term (1–3 years): Further expand Blob throughput and launch early research projects such as ethproofs.org’s zkEVM initiative.

● Long term (4+ years): Add massive extensions to the EVM (L2 will also benefit), significantly increase blob throughput, improve censorship resistance through measures such as FOCIL, and increase speed with zero-knowledge technology.

Question 9: Verge selection and hash function

Question:

Vitalik mentioned in his recent post about Verge that we will soon be faced with three choices: (i) Verkle trees, (ii) STARK-friendly hash functions, (iii) conservative hash functions. Have you decided which path to take?

Answer (Vitalik Buterin):

This is still under intense discussion. My personal feeling is that the atmosphere has been leaning slightly towards (ii) in the past few months, but it has not been finalized yet.

I think these options should be considered in the context of the overall roadmap. Realistic options might be:

● Option A:

● 2025: Pectra, possibly with EOF

● 2026: Verkle trees

● 2027: L1 execution optimizations (delayed execution, multi-dimensional gas, repricing)

● Option B:

● 2025: Pectra, possibly with EOF

● 2026: L1 execution optimizations (delayed execution, multi-dimensional gas, repricing)

● 2027: Initial Poseidon rollout (at first only a small number of clients are encouraged to go stateless, to limit risk)

● 2028: Gradually increase the number of stateless clients

Option B is also compatible with conservative hash functions, but I would still prefer a gradual rollout: even with a hash function less risky than Poseidon, the proof system itself is still risky in its early stages.

Answer (Justin Drake — Ethereum Foundation):

As Vitalik said, the near-term options are still under discussion. But from a long-term fundamental perspective, (ii) is clearly the direction, because (i) has no post-quantum security, and (iii) is less efficient.

Question 10: VDF Progress

Question:

What is the latest progress on VDFs (verifiable delay functions)? I remember a paper in 2024 pointed out some fundamental problems.

Answer (Dmitry Khovratovich — Ethereum Foundation):

We currently lack ideal VDF candidates. This may change as new models (for analysis) and new constructions (heuristic or otherwise) are developed. But at the current state of the art, we cannot confidently say of any candidate that it cannot be sped up by, say, 5x. So the consensus is to shelve VDFs for now.

Question 11: Block time and finality time adjustment

Question:

From a developer perspective, is it more likely that the block time will be gradually shortened, or the finality time will be reduced, or both will remain unchanged until single slot finality (SSF) is achieved?

Answer (Barnabé Monnot — Ethereum Foundation):

I’m not sure there’s a middle path between the current and SSF to reduce finality time. I think launching SSF is the best chance to reduce both finality delay and slot time. We can adapt based on the existing protocol, but if SSF can be implemented in the short term, it may not be worth the effort on the current protocol.

Answer (Francesco D'Amato — Ethereum Foundation):

Before SSF, we can certainly reduce block times (e.g. to 6–9 seconds), but it would be best to first confirm whether this is compatible with SSF and other roadmap content (such as ePBS). At present, I understand that SSF should be compatible, but this does not mean that we should do it right away, and the SSF design is not yet fully determined.

Question 12: FOCIL and encrypted memory pool

Question:

Why not skip FOCIL (fork-choice enforced inclusion lists) and just use encrypted mempools?

Answer (Justin Drake — Ethereum Foundation):

Unfortunately, encrypted mempools are not sufficient to guarantee forced inclusion. We already see this with the TEE-based BuilderNet running on mainnet: for example, Flashbots censors OFAC-sanctioned transactions from its BuilderNet blocks. A TEE, which has access to unencrypted transaction content, can filter easily. More advanced mempools based on MPC (multi-party computation) or FHE (fully homomorphic encryption) have a similar problem: a sequencer can demand zero-knowledge proofs in order to exclude transactions it does not want to include.

More broadly, encrypted mempools and FOCIL are orthogonal and complementary. Encrypted mempools target private inclusion, while FOCIL targets forced inclusion. They also sit at different layers of the stack: FOCIL is enshrined L1 infrastructure, while encrypted mempools live off-chain or at the application layer.

Answer (Julian Ma — Ethereum Foundation):

While both FOCIL and encrypted mempools aim to improve censorship resistance, they are complements rather than exact substitutes, so FOCIL is not a stepping stone toward encrypted mempools. The main reason there is no encrypted mempool today is the lack of a satisfactory design, although efforts are underway; deploying one now would impose honesty assumptions on Ethereum's liveness.

FOCIL should be deployed because it has a robust design, the community has confidence in it, and the implementation is relatively lightweight. Combined, encrypting the transactions in FOCIL lists can limit the economic damage users suffer from reordering.
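To make "forced inclusion" concrete, here is a minimal sketch of the FOCIL validity rule (the real spec is EIP-7805; the function below is a simplification with hypothetical names): a committee of validators each broadcasts an inclusion list, and a block that omits a listed transaction is rejected unless it is full.

```python
def block_satisfies_focil(block_txs: set[str],
                          inclusion_lists: list[set[str]],
                          gas_used: int,
                          gas_limit: int) -> bool:
    """Fork-choice check: attesters ignore blocks that skip transactions
    from the committee's inclusion lists without a capacity excuse."""
    required = set().union(*inclusion_lists) if inclusion_lists else set()
    missing = required - block_txs
    if not missing:
        return True
    # Simplification: omissions are only excusable if the block is full.
    return gas_used >= gas_limit
```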

Question 13: Gas and Blob limit voting

Question:

Will you make the Blob count a staker-voted parameter like the gas limit? Large players could collude to raise the limits, squeezing out small home stakers with insufficient hardware or bandwidth, centralizing stake and undermining decentralization. And if such increases are uncapped, will it become harder to object via a hard fork? What is the point of setting hardware and bandwidth requirements if they are voted on? Is voting appropriate when stakers' interests may not align with the network as a whole?

Answer (Vitalik Buterin):

I personally think it would be a good idea to (i) have the Blob count voted on by stakers like the gas limit, and (ii) have clients coordinate more frequent updates to the default voting parameters. This is functionally equivalent to a "Blob Parameters Only (BPO) fork," but more robust: if some clients upgrade late or implement it incorrectly, it does not cause a consensus failure. Many BPO-fork supporters are actually describing this idea.
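For reference, gas-limit voting today lets each proposer move the limit by just under 1/1024 of its parent's value toward the proposer's configured target; the sketch below shows that rule and how the same shape could extend to a Blob count (the Blob variant is hypothetical).

```python
def next_voted_limit(parent_limit: int, proposer_target: int) -> int:
    """Each block proposer nudges the limit toward its client-configured
    target, bounded per block (the bound mirrors the EL gas-limit rule)."""
    max_step = max(parent_limit // 1024, 1)
    if proposer_target > parent_limit:
        return min(proposer_target, parent_limit + max_step)
    return max(proposer_target, parent_limit - max_step)

# A BPO-style change then reduces to shipping new client defaults
# (e.g. a higher blob or gas target) and letting successive proposers
# walk the limit there, with no hard fork required.
limit = 36_000_000
for _ in range(600):
    limit = next_voted_limit(limit, 60_000_000)
print(limit)  # reaches 60M after ~520 blocks (~1.7 hours)
```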

Question 14: Fusaka and Glamsterdam upgrade features

Question:

What features should the Fusaka and Glamsterdam upgrades include to significantly advance the roadmap?

Answer (Francesco D'Amato — Ethereum Foundation):

As mentioned, Fusaka will significantly improve Data Availability (DA). I expect Glamsterdam to make a similar leap in the Execution Layer (EL), which has the most room for improvement (and has more than a year to determine the direction). The current repricing effort could lead to major changes in Glamsterdam, but it is not the only option.

Additionally, FOCIL can be seen as a scalability EIP that better separates local block construction from validator needs; combined with its goals of censorship resistance and reduced reliance on altruistic behavior, it will push Ethereum forward. These are my current priorities, but by no means all of them.

Answer (Barnabé Monnot — Ethereum Foundation):

Fusaka focuses on PeerDAS, which is critical for L2 scaling, and few people want other features to delay it. I hope Glamsterdam includes FOCIL and Orbit, paving the way for SSF.

The above is biased towards the consensus layer (CL) and DA, but Glamsterdam should also have an execution layer (EL) effort that significantly advances L1 scaling. Discussions on the specific feature set are ongoing.

Question 15: Forcing L2 decentralization

Question:

Given the slow progress of L2 decentralization, could an EIP "force" L2s to adopt Stage 1 or Stage 2 decentralization?

Answer (Vitalik Buterin):

Native Rollups (e.g., via the EXECUTE precompile) achieve this to some extent. L2s are still free to ignore it and code in their own backdoors, but they can instead use the simple, high-security proof system enshrined in L1. L2s pursuing EVM compatibility are likely to choose this option.

Question 16: The biggest risk to Ethereum’s survival

Question:

What is the biggest existential risk facing Ethereum?

Answer (Vitalik Buterin):

Superintelligent AI could lead to a single entity controlling most of the world’s resources and power, rendering blockchain irrelevant.

Question 17: Impact of Alt-DA on ETH holders

Question:

Is Alt-DA (DA not on the ETH mainnet) a bug or a feature for ETH holders in the short, medium and long term?

Answer (Vitalik Buterin):

I still stubbornly hope that there will be a dedicated R&D team working on an ideal Plasma-like design that allows chains that rely on Ethereum L1 to still provide users with stronger (albeit imperfect) security when using alternative DAs. There are many overlooked opportunities here that can increase user security and value to DA teams.

Question 18: Future prospects of hardware wallets

Question:

What is your vision for the future of hardware wallets?

Answer (Justin Drake — Ethereum Foundation):

In the future, most hardware wallets will be based on the phone's secure enclave rather than a separate device like a Ledger USB stick. Account abstraction has made infrastructure like passkeys usable. I expect native integrations (such as in Apple Pay) within this decade.

Answer (Vitalik Buterin):

Hardware wallets need to be "truly secure" in several ways:

1. Secure hardware: based on open-source, verifiable stacks (such as [IRIS](https://media.ccc.de/v/38c3-iris-non-destructive-inspection-of-silicon)), reducing the risk of backdoors and side-channel attacks.

2. Interface security: Provide sufficient transaction information to prevent computers from tricking users into signing unexpected content.

3. Ubiquity: The ideal is to create a device that doubles as a crypto wallet and other security purposes, encouraging more people to acquire and use it.

Question 19: L1 Gas Limit Target for 2025

Question:

What is the Gas limit target for L1 in 2025?

Answer (Toni Wahrstätter — Ethereum Foundation):

Opinions vary on Gas limits, but the core question is: should we scale L1 by increasing Gas limits, or focus on L2 and increase Blobs using technologies like DAS?

Vitalik’s recent blog discusses the rationale for modest scaling of L1. But raising the gas limit has tradeoffs:

● Higher hardware requirements

● The growth of state and historical data increases the burden on nodes

● Greater bandwidth requirements

On the other hand, the Rollup-centric vision aims to improve scalability without increasing node requirements. PeerDAS (short term) and Full DAS (medium to long term) will unlock significant potential while keeping resources manageable.

I would not be surprised if after the Pectra hard fork (in April), validators push the gas limit to 60 million. But in the long run, the focus of expansion may be on the DAS solution rather than simply increasing the gas limit.

Question 20: Beam Client Transition

Question:

If the Ethereum Beam client experiment (or its renamed version) is successful and there are several available implementations in 2–3 years, will there need to be a phase where the current PoS and Beam PoS run in parallel and both receive staking rewards, just like during the transition from PoW to PoS?

Answer (Vitalik Buterin):

I think it can be a direct, in-place upgrade.

The reasons for running two chains during the Merge were:

● PoS had not been battle-tested; the ecosystem needed time to run it and ensure the switch was safe.

● PoW blocks can be reorged, so the switching mechanism needed to be robust.

PoS has finality, and most of the infrastructure (such as staking) carries over. We can switch the validation rules from the beacon chain to the new design through a hard fork. Economic finality may be temporarily weakened at the transition point, but that is a small, acceptable price.

Answer (Justin Drake — Ethereum Foundation):

I assume that the upgrade from the beacon chain to Beam will be handled like a normal fork, without the need for a “merge 2.0”. A few thoughts:

1. The consensus participants (ETH stakers) are the same on both sides of the fork, unlike the Merge, where the participant set changed and there was a risk of miner interference.

2. The "clocks" on both sides of the fork are consistent, unlike the PoW-to-PoS transition from probabilistic block times to fixed slots.

3. Infrastructure such as libp2p, SSZ, and anti-slashing databases are mature and can be reused.

4. There is no need to rush to disable PoW to avoid additional issuance this time. You can take the time to do due diligence and quality assurance (run multiple test networks) to ensure a smooth mainnet fork.

Question 21: Academic funding plan for 2025

Question:

The Ethereum Foundation has launched a $2 million academic funding program through 2025. What research areas are prioritized? How will the results be integrated into the Ethereum roadmap?

Answer (Fredrik Svantes — Ethereum Foundation):

The protocol security team is interested in:

● P2P security: Many vulnerabilities are related to network layer DoS attacks (such as libp2p or devp2p), and improvements in this area are valuable.

● Fuzz testing: The EVM and consensus layer clients have been tested, but areas such as the network layer can be explored in depth.

● Supply Chain Risk: Understand Ethereum’s current dependency risks.

● LLM applications: How large language models can improve protocol security (such as auditing code and automated fuzz testing).

Answer (Alexander Hicks — Ethereum Foundation):

On integration: we keep at it by engaging academia, funding research, and participating in it ourselves. The Ethereum system is unique, and academic research does not always directly shape the roadmap (for example, our consensus protocol is unusual enough that academic results are hard to transfer directly), but the impact is very visible in areas such as zero-knowledge proofs.

The academic grant program is part of our internal and external research, and this time we are exploring interesting content that may not directly affect the roadmap. For example, I added formal verification and AI-related topics. The practicality of AI in Ethereum tasks is still to be verified, but I want to promote progress in the next one or two years. This is a good opportunity to evaluate the current situation and improve methods, and it can also attract cross-disciplinary researchers who don’t know much about Ethereum but are interested.