
Vitalik: Maintain the minimalism of the chain, and don't overload Ethereum's consensus
Social consensus in the blockchain community is a fragile thing.
Compiled by: Web3 Age of Exploration
Ethereum's consensus mechanism is one of the most secure cryptoeconomic systems in existence. Validators controlling about 18 million ETH (approximately $34 billion) finalize blocks every 6.4 minutes, running multiple diverse protocol implementations for redundancy. If the cryptoeconomic consensus fails, whether due to bugs or a deliberate 51% attack, a vast community of thousands of developers and countless users closely monitors the situation to ensure the chain recovers correctly; once recovery occurs, protocol rules ensure attackers face severe penalties.
Over the years, many ideas—often at the thought experiment stage—have proposed leveraging Ethereum’s validator set, or even Ethereum’s social consensus, for additional purposes:
- The ultimate oracle: A proposal where users vote on facts by staking ETH, using a SchellingCoin mechanism: each participant voting with the majority receives a proportional share of the ETH staked by those who voted with the minority (see the payout sketch after this list). The description continues: “So in principle this is a symmetric game. What breaks the symmetry is a) truth being a natural coordination point, and more importantly b) those betting on the truth can threaten to fork Ethereum if they lose.”
- Re-staking: A suite of techniques used by many protocols (including EigenLayer), whereby Ethereum stakers can use their stake simultaneously as a deposit in another protocol. In some cases, if they violate the other protocol’s rules, their deposit may also be slashed; in others, there are no in-protocol incentives and the stake is used only for voting.
- L1-driven L2 project recovery: It has been proposed multiple times that if an L2 has a bug, the L1 could fork to recover it. A recent example is a design using an L1 soft fork to recover a failed L2.
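To make the SchellingCoin incentive in the first item concrete, here is a minimal Python sketch of one voting round's payout. The data shapes, function name, and the stake-weighted notion of "majority" are illustrative assumptions, not any deployed protocol's code:

```python
from collections import defaultdict

def schelling_payout(votes):
    """votes: dict mapping voter -> (answer, eth_staked). Returns voter -> ETH change."""
    stake_per_answer = defaultdict(float)
    for answer, stake in votes.values():
        stake_per_answer[answer] += stake

    winning = max(stake_per_answer, key=stake_per_answer.get)   # stake-weighted majority answer
    winners = {v for v, (a, _) in votes.items() if a == winning}
    losers = set(votes) - winners

    pot = sum(votes[v][1] for v in losers)                        # minority stake is forfeited...
    winner_stake = sum(votes[v][1] for v in winners) or 1.0
    deltas = {v: -votes[v][1] for v in losers}
    for v in winners:                                             # ...and shared pro rata among the majority
        deltas[v] = pot * votes[v][1] / winner_stake
    return deltas

# Example: two voters report one value, a third deviates and loses their stake.
print(schelling_payout({"a": (1800, 32), "b": (1800, 32), "c": (9999, 32)}))
```

The symmetry the quote refers to is visible here: nothing in the payout code knows which answer is true; it only rewards coordination on whatever answer attracts the most stake.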
The purpose of this article is to explain in detail why I believe some of these techniques pose high systemic risks to the ecosystem and should be prevented and resisted.
These proposals are often made with good intentions, so the goal is not to target individuals or projects but rather the technologies themselves. The general principle I aim to defend is this: While double-use of ETH staked by validators carries some risk, it is largely acceptable; however, attempts to “recruit” Ethereum’s social consensus to serve the goals of your application are undesirable.
Examples distinguishing validator reuse (low risk) from social consensus overload (high risk):
- Alice creates a web3 social network where you automatically get "verified" status if you can cryptographically prove control over an active Ethereum validator key. Low risk.
- Bob uses cryptographic proof that he controls ten active Ethereum validator keys to demonstrate sufficient wealth to meet certain legal requirements. Low risk.
- Charlie claims he has disproved the twin prime conjecture and knows the largest p such that both p and p+2 are prime. He changes his validator withdrawal address to a smart contract where anyone can submit a claimed counterexample q > p along with a SNARK proving that both q and q+2 are prime. If someone submits a valid claim, Charlie’s validator is forcibly exited, and the submitter receives Charlie’s remaining ETH. Low risk.
- Dogecoin decides to switch to proof-of-stake and, to increase the size of its security pool, allows Ethereum stakers to “double-stake” into its validator set. To do so, Ethereum stakers must change their withdrawal credentials to a smart contract where anyone can submit proof that they violated Dogecoin’s staking rules. If such proof is submitted, the staker’s validator is forcibly exited, and their remaining ETH is used to buy and burn DOGE. Low risk.
- eCash does the same as Dogecoin, but its project leaders further announce: if a majority of participating ETH validators collude to censor eCash transactions, they expect the Ethereum community to hard fork to remove those validators. They argue Ethereum has an interest in doing so because these validators have proven malicious and unreliable. High risk.
- Fred creates an ETH/USD price oracle that operates by allowing Ethereum validators to participate and vote, without incentives. Low risk.
- George creates an ETH/USD price oracle that runs by allowing ETH holders to participate and vote. To prevent laziness and potential bribery, he adds an incentive mechanism: participants whose answers fall within 1% of the median receive 1% of the ETH staked by those whose answers deviate by more than 1% from the median (a minimal sketch of this payout rule follows this list). When asked, “What if someone credibly bribes all participants, everyone starts submitting wrong answers, and honest people lose 10 million ETH?”, George could answer in several ways:
  - George replies: “Then Ethereum will have to strip away the funds of the bad participants.” High risk.
  - George dodges the question. Medium-high risk (because the mechanism creates incentives to attempt such a fork, so a fork attempt may well be expected even without explicit encouragement).
  - George replies: “Then the attacker wins, and we’ll abandon using this oracle.” Medium-low risk (not entirely “low risk” because the mechanism does create a large group of actors who, during a 51% attack, might be incentivized to independently advocate a fork to protect their stakes).
- Hermione creates a successful layer-2 and claims that because her L2 is the largest, it is inherently the most secure—since any bug causing fund theft would result in such massive losses that the community would have no choice but to fork to restore user funds. High risk.
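As a concrete illustration of George's incentive rule, here is a minimal Python sketch of one settlement round. The names and the pro-rata distribution of the penalty pool are illustrative assumptions; the text only specifies the 1%-of-median band and the 1% penalty:

```python
from statistics import median

def settle_oracle_round(votes):
    """votes: dict mapping voter -> (reported_price, eth_staked). Returns voter -> ETH change."""
    m = median(price for price, _ in votes.values())
    in_band = {v for v, (p, s) in votes.items() if abs(p - m) <= 0.01 * m}
    deviators = set(votes) - in_band

    # Each deviator forfeits 1% of their stake into a reward pool...
    deltas = {v: -0.01 * votes[v][1] for v in deviators}
    pool = sum(0.01 * votes[v][1] for v in deviators)

    # ...which in-band voters split pro rata to their own stake (distribution rule assumed).
    in_band_stake = sum(votes[v][1] for v in in_band) or 1.0
    for v in in_band:
        deltas[v] = pool * votes[v][1] / in_band_stake
    return deltas

# Example: one voter reports a price about 5% off the median and is penalized.
print(settle_oracle_round({"a": (1800, 32), "b": (1805, 32), "c": (1890, 32)}))
```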
If you're designing a protocol where, even if everything collapses completely, losses are confined to validators and users who chose to participate and use your protocol, that is low risk. On the other hand, if you intentionally invoke broader Ethereum ecosystem social consensus to resolve your issues via forks or reorganizations, that is high risk, and I believe we should strongly resist all attempts to create such expectations.
Gray-area cases start in the low-risk category but create incentives for participants to slide toward higher-risk behavior; SchellingCoin-style mechanisms, especially those imposing heavy penalties on deviation from the majority, are a prime example.
So what’s wrong with stretching Ethereum’s consensus?
Suppose it’s 2025, and a frustrated group decides to create a new ETH/USD price oracle that lets validators vote on the price every hour; participating validators unconditionally receive a small share of system fees as a reward. But soon, participants become lazy: they connect to centralized APIs, which, when attacked, either go offline or begin reporting incorrect values. To fix this, an incentive is introduced: the oracle also votes on last week’s price, and if either your real-time or your retrospective vote differs by more than 1% from the median, you are severely penalized, with the penalty funds going to those who voted “correctly.”
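A minimal sketch of that penalty rule, with an assumed penalty rate and an assumed (even-split) redistribution rule; the function and variable names are illustrative only:

```python
from statistics import median

def penalize_round(realtime_votes, retrospective_votes, stake, penalty_rate=0.05):
    """Both vote dicts map voter -> reported price for the same hour (the retrospective
    votes are cast a week later). A voter whose vote in either round deviates from that
    round's median by more than 1% forfeits penalty_rate of their stake; the total is
    split evenly among voters who stayed within the band both times."""
    def deviators(votes):
        m = median(votes.values())
        return {v for v, p in votes.items() if abs(p - m) > 0.01 * m}

    bad = deviators(realtime_votes) | deviators(retrospective_votes)
    good = set(stake) - bad

    deltas = {v: -penalty_rate * stake[v] for v in bad}       # slashed participants
    pool = sum(penalty_rate * stake[v] for v in bad)
    for v in good:                                            # rewards for "correct" voters
        deltas[v] = pool / len(good)
    return deltas
```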
Within a year, over 90% of validators participate. Someone asks: What if Lido teams up with several other large stakers to execute a 51% attack on the vote, forcing through a false ETH/USD price and heavily penalizing everyone not involved? By now, supporters of the oracle are deeply invested in the scheme and reply: If this ever happens, Ethereum will surely fork to kick out the bad actors.
Initially limited to ETH/USD, the system appears very stable. But over time, other indices are added: ETH/EUR, ETH/CNY, and eventually exchange rates for all G20 countries.
However, in 2034, things go wrong. Brazil experiences an unexpected and severe political crisis leading to contested elections. One party controls the capital and 75% of the territory, while another controls some northern regions. Major Western media consider the northern party the clear legitimate winner because its actions are lawful, whereas the southern party’s actions are illegal (and fascist). Yet official sources from India and China, along with Elon Musk, argue the southern party actually controls most of the land, and the international community shouldn’t act as world police but accept the outcome.
By then, Brazil already has a CBDC, which splits into two forks: (northern) BRL-N and (southern) BRL-S. During oracle voting, 60% of Ethereum stakers report the ETH/BRL-S rate. Most community leaders and businesses condemn the stakers’ cowardly submission to fascism and propose a hard fork that removes the validators providing ETH/BRL-S rates and slashes their balances nearly to zero. Within their social media bubbles, they expect a decisive victory. Yet once the fork occurs, the BRL-S side proves surprisingly strong. Their anticipated overwhelming win turns into an almost 50-50 split.
At this point, both sides exist in separate universes, each with its own chain, effectively unable to reunite. Ethereum, a global permissionless platform created partly to escape national and geopolitical influences, ends up divided due to an unexpected internal crisis in one G20 country.
This makes for a great sci-fi story, even a good movie. But what can we actually learn from it?
Blockchain “purity”—in the sense that it is a purely mathematical construct aiming to reach consensus only on purely mathematical facts—is a major advantage. Whenever a blockchain tries to “hook into” the external world, external conflicts begin affecting the chain. Given an extreme enough political event—and considering the above story closely mirrors actual events in major (>25 million population) countries over the past decade—even a currency oracle could tear the community apart.
Here are some plausible scenarios:
- The currency tracked by the oracle (even possibly the dollar) undergoes hyperinflation, and markets collapse so severely that at certain points no clear market price exists.
- If Ethereum adds a price oracle for another cryptocurrency, controversial splits like the one described aren’t hypothetical; they’ve already happened, including in Bitcoin’s and Ethereum’s own histories.
- Under strict capital controls, which price between two currencies counts as the legitimate market rate becomes a political question.
But more importantly, I believe there is a Schelling fence: once blockchains begin treating real-world price indices as layer-1 protocol features, they easily succumb to incorporating increasing amounts of real-world information. Introducing a price index layer also expands the blockchain’s legal attack surface: it ceases to be merely a neutral technical platform and becomes more visibly a financial instrument.
Beyond price indices, what other risks exist?
- Any expansion of Ethereum consensus “responsibilities” increases the cost, complexity, and risk of running a validator.
- Validators must dedicate human effort to monitoring, and run additional software, to ensure correct behavior under the newly introduced protocols.
- Other communities gain the ability to externalize their dispute-resolution needs onto the Ethereum community.
- Validators and the entire Ethereum community are forced to make more decisions, each carrying a risk of community split.
- Even without splits, the desire to avoid such pressure creates additional incentives to outsource decision-making to centralized entities via staking pools.
The possibility of splits also greatly strengthens problematic too-big-to-fail dynamics. With so many L2s and application-layer projects on Ethereum, it becomes impractical for Ethereum’s social consensus to fork to resolve all such issues. Thus, larger projects inevitably stand a greater chance of rescue. This in turn creates entrenched advantages: would you rather keep your coins on Arbitrum or Optimism, where if something goes wrong Ethereum might fork to save the day, or on Taiko, which is smaller (and non-Western, thus having fewer social connections within core developer circles), making L1-supported rescue less likely?
But bugs are a risk, and we need better oracles. So what should we do?
I believe the best solutions vary case by case, because the problems are fundamentally so different. Some solutions include:
- Price oracles: Either non-fully-cryptoeconomic decentralized oracles, or validator-voting-based oracles that explicitly commit to emergency recovery strategies other than appealing to L1 consensus (or some combination of the two). For instance, a price oracle could rely on the trust assumption that corrupting its voting participants is slow, giving users advance warning of an attack and time to exit oracle-dependent systems. Such an oracle could also deliberately delay rewards significantly, so participants receive nothing if an instance stops being used (e.g., due to oracle failure and the community forking to another version); a minimal sketch of this delayed-reward idea follows this list.
- More complex truth oracles for subjective facts beyond prices: Some form of decentralized court system built on a non-fully-cryptoeconomic DAO.
- Layer 2 protocols:
  - Short term: Rely on partial training wheels (what this post calls “stage 1”).
  - Medium term: Rely on multiple proof systems. Trusted hardware (e.g., SGX) could be included here; I strongly oppose relying solely on SGX-like systems for security, but as part of a 2-of-3 system they may be valuable.
  - Long term: Hope that complex functionality like “EVM verification” is eventually incorporated into the protocol itself.
- Cross-chain bridges: Similar logic to oracles, but also minimize reliance on bridges: hold assets on their source chains and use atomic swap protocols to move value between chains.
- Using Ethereum’s validator set to secure other chains: One reason the (safer) Dogecoin approach above might be insufficient is that while it prevents 51% finality-reversal attacks, it doesn’t prevent 51% censorship attacks. However, if you’re already relying on Ethereum’s validators, one possible direction is to stop trying to maintain an independent chain and instead become a validium anchored to Ethereum. If a chain makes this change, its protection against finality reversals becomes as strong as Ethereum’s, and its protection against censorship attacks rises from 49% to 99%.
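As a rough illustration of the delayed-rewards idea in the price-oracle item above, here is a minimal Python sketch: rewards vest only after a long delay, and if the community has abandoned this oracle instance in the meantime (e.g., forked to a replacement after a failure), unvested rewards are never paid. The class, method names, and vesting parameter are hypothetical:

```python
from dataclasses import dataclass, field

VESTING_DELAY = 100_000  # blocks before a reward can be claimed; illustrative value


@dataclass
class DelayedRewardOracle:
    abandoned: bool = False                       # set if the community forks away from this instance
    pending: list = field(default_factory=list)   # entries of (voter, amount, block_earned)
    balances: dict = field(default_factory=dict)

    def accrue(self, voter: str, amount: float, block: int) -> None:
        """Record a reward, but do not pay it out yet."""
        self.pending.append((voter, amount, block))

    def process_claims(self, current_block: int) -> None:
        """Pay out vested rewards; forfeit everything if the instance was abandoned."""
        if self.abandoned:
            self.pending.clear()                  # unvested rewards are never paid
            return
        still_pending = []
        for voter, amount, earned_at in self.pending:
            if current_block >= earned_at + VESTING_DELAY:
                self.balances[voter] = self.balances.get(voter, 0.0) + amount
            else:
                still_pending.append((voter, amount, earned_at))
        self.pending = still_pending
```

The design intent is that an attacker who corrupts the oracle cannot collect the rewards earned during the attack, because by the time they vest the community will have stopped using that instance.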
Conclusion:
Social consensus in blockchain communities is a fragile thing. Because upgrades, bugs, and the ever-present possibility of 51% attacks necessitate it, social consensus is unavoidable—but precisely because it carries such a high risk of chain splits, we should use it cautiously, especially in mature communities. There is a natural urge to expand blockchain core functionality, since the core holds the greatest economic weight and the largest observer base, but each such expansion makes the core itself more vulnerable.
We should be wary of application-layer projects taking actions that might expand the “scope” of blockchain consensus, unless those actions involve enforcing core Ethereum protocol rules. It’s natural for application-layer projects to try such strategies, and such ideas often arise without awareness of the risks, but their consequences can easily diverge sharply from the community’s overall goals. Without limiting principles, this process could lead blockchain communities over time to accumulate ever more “responsibilities,” forcing an uncomfortable choice between annual high-risk splits and a de facto bureaucracy holding ultimate control over the chain.
We should preserve the chain’s minimalism, support re-staking uses that don’t look like slippery slopes toward extending the role of Ethereum consensus, and help developers find alternative strategies to achieve their security goals.