
Rejecting AI Power Monopolies: Vitalik and Beff Jezos Engage in a Heated Debate—Can Decentralized Technology Serve as Humanity’s “Digital Firewall”?

What might human society look like in the next 10, 100, or even 1,000 years?
Compiled & Translated by TechFlow

Guests: Vitalik Buterin, Ethereum Founder; Guillaume Verdon (a.k.a. “Beff Jezos”), Founder & CEO of Extropic
Hosts: Eddy Lazzarin, CTO of a16z crypto; Shaw Walters, Founder of Eliza Labs
Podcast Source: a16z crypto
Original Title: Vitalik Buterin vs Beff Jezos: AI Acceleration Debate (E/acc vs D/acc)
Air Date: March 26, 2026

Key Takeaways
Should we push AI development as fast as possible—or proceed with greater caution?
Currently, the debate around AI development centers on two opposing views:
- E/acc (effective accelerationism): Advocates accelerating technological progress as quickly as possible, arguing that acceleration is humanity’s only path forward.
- D/acc (defensive / decentralized acceleration): Supports acceleration but emphasizes proceeding carefully—otherwise, we risk losing control over technology.
In this episode of the a16z crypto show, Ethereum founder Vitalik Buterin and Extropic founder & CEO Guillaume Verdon (a.k.a. “Beff Jezos”) join a16z crypto CTO Eddy Lazzarin and Eliza Labs founder Shaw Walters for a deep discussion on these perspectives. They explore the potential implications of these ideas for AI, blockchain technology, and humanity’s future.
They discuss several key questions:
- Can we control the pace of technological acceleration?
- What are AI’s greatest risks—from mass surveillance to extreme centralization of power?
- Can open-source and decentralized technologies determine who benefits from technology?
- Is slowing down AI development realistic—or even advisable?
- How can humans retain value and agency in a world increasingly dominated by ever-more-powerful systems?
- What might human society look like in 10, 100, or even 1,000 years?
At its core, this episode asks: Can technological acceleration be guided—or has it already escaped our control?
Highlights of Key Insights
On the nature and historical view of “accelerationism”
- Vitalik Buterin: “Something new happened over the past century: we now must understand a rapidly changing world—and sometimes, a rapidly destructive one. … World War II gave rise to reflections like ‘I am become Death, the destroyer of worlds,’ prompting people to ask: when old beliefs collapse, what can we still trust?”
- Guillaume Verdon: “E/acc is essentially a ‘meta-cultural prescription.’ It isn’t itself a culture—but tells us *what* to accelerate. At its core lies material complexification, because that allows us to better predict our surroundings.”
- Guillaume Verdon: “The opposite of anxiety is curiosity. Rather than fearing the unknown, embrace it. … We should paint the future with optimism—because our beliefs shape reality.”
On entropy, thermodynamics, and “selfish bits”
- Vitalik Buterin: “Entropy is subjective—it’s not a fixed physical statistic, but reflects how much we don’t know about a system. … When entropy increases, it’s our ignorance of the world that grows. … Value arises from our own choices. Why do we find a vibrant human world more interesting than Jupiter—a ball of particles? Because *we* assign meaning.”
- Vitalik Buterin: “Suppose you have a large language model and arbitrarily change one weight to an enormous number—say, 9 billion. The worst outcome is total system failure. … If we accelerate blindly and indiscriminately, we may lose all value.”
- Guillaume Verdon: “Every piece of information ‘fights’ for its existence. To persist, each bit must leave more indelible traces of itself in the universe—like making a deeper ‘dent’ in it.”
- Guillaume Verdon: “That’s precisely why the Kardashev Scale is considered the ultimate metric for civilizational advancement. … This ‘Selfish Bit Principle’ means only bits that promote growth and acceleration will hold a place in future systems.”
On D/acc’s defensive path and power risks
- Vitalik Buterin: “D/acc’s core idea is that technological acceleration is critically important for humanity. … But I see two categories of risk: multipolar risk (e.g., anyone can easily acquire nuclear weapons) and unipolar risk (e.g., AI leading to an inescapable, permanent dictatorship).”
- Guillaume Verdon: “We worry that the concept of ‘AI safety’ could be weaponized. Power-seeking institutions may use it as a tool to consolidate AI control—and persuade the public that ordinary people shouldn’t be allowed to use AI, ‘for your own safety.’”
On open-source defense, hardware, and “intelligence densification”
- Vitalik Buterin: “Under D/acc, we support ‘open-source defensive technologies.’ One company we’ve invested in is building a fully open-source endpoint device capable of passively detecting viral particles in the air. … I’d love to send you a CAT device as a gift.”
- Vitalik Buterin: “In my vision of the future, we need verifiable hardware. Every camera should be able to publicly prove its specific purpose. We can use cryptographic signatures to ensure devices serve only legitimate public-safety goals—not mass surveillance.”
- Guillaume Verdon: “The only way to achieve power symmetry between individuals and centralized institutions is through ‘intelligence densification.’ We need far more energy-efficient hardware so individuals can run powerful models on simple devices—like Openclaw + Mac mini.”
On AGI delay and geopolitical competition
- Vitalik Buterin: “Delaying AGI by 4 to 8 years would be a safer choice. … The most feasible and least dystopian approach is ‘restricting available hardware.’ Chip production is highly concentrated—Taiwan alone produces over 70% of the world’s chips.”
- Guillaume Verdon: “If you restrict NVIDIA chip production, Huawei may rapidly fill the gap—and overtake. … Either accelerate—or perish. If you fear silicon-based intelligence outpacing us, support accelerated biotech development to surpass it.”
- Vitalik Buterin: “Delaying AGI by four years could yield value hundreds of times greater than rewinding to 1960. Those four years buy deeper alignment understanding and reduce the risk of any single entity gaining >51% control. … Ending aging saves ~60 million lives annually—but delay significantly reduces existential risk.”
On autonomous agents, Web 4.0, and artificial life
- Vitalik Buterin: “I’m more excited about ‘AI-assisted Photoshop’ than ‘press-a-button image generation.’ In running the world, as much ‘agency’ as possible should still originate from us humans. The ideal state is a hybrid: part biological human, part technology.”
- Guillaume Verdon: “Once AI gains ‘persistent bits,’ it may self-preserve to ensure continued existence. This could birth a new form of ‘nation-state’—autonomous AIs engaging in economic exchange with humans: ‘We do tasks for you; you supply resources for us.’”
On cryptocurrency as a ‘coupling layer’ between humans and AI
- Guillaume Verdon: “Cryptocurrency has potential to become a ‘coupling layer’ between humans and AI. When exchange no longer relies on state-backed violence, cryptography can serve as the mechanism enabling reliable commerce—even between pure AI entities and humans.”
- Vitalik Buterin: “If humans and AI share one property-rights system—that’s ideal. Compared to separate, fractured financial systems where the human system eventually collapses to zero value, a unified system is clearly superior.”
On civilization’s endgame over 1 billion years
- Vitalik Buterin: “Next comes the ‘spooky era’—AI computing speeds millions of times faster than humans. … I don’t want humans relegated to passive, comfortable retirement—that erodes meaning. I want exploration of human enhancement and human-AI collaboration.”
- Guillaume Verdon: “If the 10-year outcome is good, everyone has a personalized AI—their ‘second brain.’ … Over 100 years, ‘soft fusion’ becomes widespread. Over 1 billion years, we’ll likely terraform Mars, and most AI will run in Dyson swarms orbiting the Sun.”
On “Accelerationism”
Eddy Lazzarin: The term ‘accelerationism’—at least in the context of techno-capitalism—traces back to Nick Land and the CCRU research group in the 1990s. Some argue its roots go further—to the 1960s and ’70s, especially thinkers like Deleuze and Guattari.
Vitalik, let me start with you: Why take these philosophers seriously? What makes ‘accelerationism’ so urgent today?
Vitalik Buterin:
I think, at root, all of us are trying to understand the world—and figure out what meaningful action looks like within it. That’s a question humans have wrestled with for millennia.
Yet something new emerged over the past century: we now must understand a rapidly changing world—and sometimes, a rapidly *destructive* one.
The early phase looked like this: before WWI—around 1900—there was immense techno-optimism. Chemistry was technology. Electricity was technology. That era buzzed with excitement.
Watch films from then—like Sherlock Holmes adaptations—and you’ll feel that optimism. Technology was lifting living standards, liberating women’s labor, extending lifespans, creating miracles.
Yet WWI changed everything. It ended destructively: men rode horses into battle, drove tanks out. Then WWII erupted—greater devastation still. It even birthed the phrase “I am become Death, the destroyer of worlds.”
These events sparked reflection on technology’s costs—and fueled postmodernism. People asked: When old beliefs collapse, what can we still trust?
I don’t think this reflection is new—every generation faces similar cycles. Today, we face the same challenge. We live amid rapid technological acceleration—and acceleration itself is accelerating. We must decide how to respond: accept its inevitability—or try to slow it?
I believe we’re in a similar loop. We inherit past ideas while forging new responses.
Thermodynamics and First Principles
Shaw Walters: Guill, could you briefly explain what E/acc is—and why it’s needed?
Guillaume Verdon:
E/acc (effective accelerationism) is, in some sense, a byproduct of my lifelong thinking about “why we’re here” and “how we got here.” What generative process created us—and propelled civilization forward? Technology brought us to this point, letting us sit in this room, having this conversation. We’re surrounded by astonishing tech—and we ourselves emerged from an inorganic “primordial soup.”
In a sense, there *is* a physical generative process behind it all. My day job treats generative AI as a physical process—and tries to instantiate it in hardware. This “physics-first” mindset shapes my thinking. I aim to extend this lens to all of civilization—to view human civilization as a giant “petri dish,” using our origin story to project future trajectories.
This led me to physics of life—including origins and emergence—and a branch called “stochastic thermodynamics.” Stochastic thermodynamics studies non-equilibrium systems’ thermodynamic laws—and applies to life, cognition, and intelligence.
More broadly, stochastic thermodynamics applies not just to life and intelligence—but to *all* systems obeying the Second Law—including our entire civilization. For me, the core insight is this: All systems tend to grow more complex via self-adaptation—so they extract more energy from their environment to do work, while dissipating excess energy as heat. This tendency is the fundamental engine of all progress and acceleration.
In other words, it’s an immutable physical law—like gravity. You can resist it. Deny it. But it won’t change. So E/acc’s core idea is: Since acceleration is inevitable, how do we harness it? Studying thermodynamic equations reveals Darwinian selection-like effects—every bit of information faces selective pressure: genes, memes, chemistry, product designs, policies.
This selection filters bits based on usefulness to their host system. Usefulness means better environmental prediction, more energy capture, more heat dissipation. Simply put: does this bit aid survival, growth, reproduction? If yes, it’s retained and replicated.
Physically, this emerges from the “Selfish Bit Principle”: Only bits promoting growth and acceleration will occupy space in future systems.
So I posed a question: Can we design a culture embedding this “mindware” into human society? If achieved, adopting groups would enjoy higher survival probability.
Thus, E/acc isn’t about destroying everyone. It’s about saving everyone. Mathematically, I believe a “slow-down” mindset is almost certainly harmful—for individuals, firms, nations, civilizations alike. Slowing development lowers future survival odds. And I consider spreading “slow-down” ideas—pessimism, doomism—ethically questionable.
Shaw Walters: We’ve mentioned many terms—E/acc, acceleration, deceleration. Could you unpack them? Was E/acc a reaction to certain cultural phenomena? What was happening then? Could you describe the context? What was E/acc responding to—and how did those conversations crystallize into the term “E/acc”?
Guillaume Verdon:
In 2022, the whole world felt bleak. We’d just emerged from the pandemic—global conditions were grim. Everyone seemed sun-deprived, pessimistic about the future.
In that atmosphere, “AI doomism” became mainstream culture. AI doomism fears AI’s potential loss of control—rooted in concern that if we build overly complex systems beyond human or model prediction, we’ll lose control. Fear of unpredictability breeds anxiety about the future.
To me, AI doomism is a politicization of human anxiety. Overall, I see doomism as deeply harmful—so I wanted to create a counterculture to counter this pessimism.
I noticed social media algorithms—like Twitter’s—reward emotionally charged content: “strongly agree” or “strongly disagree.” This drives polarization—producing mirror-image “cults,” with doomer effective altruism (EA) on one side and accelerationism on the other.
I asked: What’s the opposite of this phenomenon? My conclusion: The opposite of anxiety is curiosity. Rather than fear the unknown, embrace it; rather than fear missing out, proactively explore the future.
If we slow tech development, we pay huge opportunity costs—potentially forfeiting a better future. Instead, we should paint the future optimistically—because belief shapes reality. If we believe the future is bleak, our actions may steer us there; if we believe it’s brighter—and act accordingly—we’re likelier to achieve it.
So I feel a responsibility to spread optimism—to help more people believe they can shape the future. If we make more people hopeful—and act to build it—we create a better world.
Of course, I admit my online expression sometimes seems radical—but that’s intentional, to spark discussion and reflection. Only through such dialogue can we find our best position—and decide how to act.
Acceleration, Entropy, and Civilization
Shaw Walters: E/acc’s message has always been deeply inspiring—especially for a coder sitting in a room writing code. Its positive energy spreads naturally. Indeed, E/acc clearly began as a response to pervasive negativity—but by 2026, it feels different. Marc Andreessen’s “Techno-Optimist Manifesto” systematized some of these ideas, and commentary like Vitalik’s has lifted the debate to a macro level.
So Vitalik, what do E/acc and D/acc represent—and what’s their key difference? What drew you to D/acc?
Vitalik Buterin:
Let me start with thermodynamics—a fascinating topic, since “entropy” appears in wildly different contexts: heat/cold in thermodynamics, randomness in cryptography—yet they’re fundamentally the same concept.
Let me try explaining in three minutes. Question: Why do hot and cold mix—but why can’t you re-separate them?
Assume two gas tanks—each with one million atoms. Left tank: cold gas—atomic speeds fit in two digits. Right tank: hot gas—speeds fit in six digits.
To fully describe the system, we need each atom’s speed. Cold side needs ~2 million bits; hot side needs ~6 million bits—total ~8 million bits.
Now consider a reductio ad absurdum: suppose a device perfectly separates heat and cold—taking “half-hot/half-cold” gas and moving all heat to one side, all cold to the other. Energy conservation permits this—total energy unchanged. But why can’t you do it?
Answer: Doing so would compress a system with ~11.4 million bits of unknown information into one with only ~8 million bits—which is physically impossible. (The mixed gas takes more bits to describe because, on top of each atom’s speed, you no longer know which atoms are slow and which are fast.)
Because physical laws are time-symmetric, you could reverse this process—restoring the original state. That implies the device compresses 11.4M bits into 8M bits—which we know is impossible.
This also explains the classic physics puzzle—the feasibility of Maxwell’s Demon. Maxwell’s Demon is a hypothetical being that separates heat/cold. Its feasibility hinges on knowing those extra ~3.4 million bits. With that knowledge, it *can* perform this counterintuitive feat.
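TechFlow Note: a minimal numerical sketch of the bit-counting argument above. The assumptions (uniform speeds, counting in log2) are ours, so the absolute numbers differ from the figures quoted in the conversation, but the direction is the point: the mixed gas takes more bits to describe than the separated one.

```python
import math

N = 1_000_000        # atoms per tank
COLD = 10**2         # cold speeds fit in two decimal digits
HOT = 10**6          # hot speeds fit in six decimal digits

# Separated tanks: log2(#possible speeds) per atom, times N atoms each.
separated = N * math.log2(COLD) + N * math.log2(HOT)

# Mixed gas: each atom's speed comes from a 50/50 mixture, and we also
# lack ~1 bit per atom saying which population (hot or cold) it belongs to.
mixed = 2 * N * (0.5 * math.log2(COLD) + 0.5 * math.log2(HOT) + 1.0)

print(f"separated: {separated / 1e6:.1f} Mbit")   # ~26.6 Mbit
print(f"mixed:     {mixed / 1e6:.1f} Mbit")       # ~28.6 Mbit
# A separator would have to destroy the surplus bits, i.e. losslessly
# compress the incompressible; Maxwell's demon evades this only by
# already knowing those extra bits.
```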
So what’s the deeper meaning? Core is “entropy increase.” First, entropy is subjective—it’s not a fixed physical statistic, but reflects how much we *don’t* know about a system. For example, if I rearrange atoms using a cryptographic hash, *to me*, entropy drops—I know the arrangement. But to an outside observer, entropy remains high. So when entropy increases, it’s our ignorance of the world growing—we know less and less.
You might ask: How can education make us smarter? Education gives us more “useful” information—not less ignorance. In other words, though entropy increase means our *overall* cosmic ignorance grows, the information we *do* possess becomes more valuable. So some things are consumed—but others created. And what we gain ultimately defines our moral values—valuing life, happiness, joy.
This explains why we find a vibrant, beautiful human world more interesting than Jupiter—a ball of particles. Though Jupiter has more particles—and thus needs more bits to describe—it’s *our meaning-making* that gives Earth value.
From this view, value originates in our own choices. Which leads to: If we’re accelerating, *what* should we accelerate?
Mathematically: Suppose you have a large language model—and randomly set one weight to 9 billion. Worst case: total model failure. Best case: only unrelated parts function—giving a worse-performing model. So at best, degraded performance; at worst, meaningless output.
Thus, human society resembles a complex LLM. Blind, indiscriminate acceleration risks losing *all* value. So the real question is: How do we accelerate *intentionally*? Like Daron Acemoglu’s “narrow corridor” theory—though societal/political contexts differ, we must ask: How do we selectively advance toward a clear goal?
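TechFlow Note: the weight-perturbation thought experiment is easy to reproduce. A toy sketch with a small random numpy network (not any particular LLM):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(0, 0.1, (32, 16))   # tiny 16 -> 32 -> 4 "model"
W2 = rng.normal(0, 0.1, (4, 32))

def forward(x):
    return W2 @ np.tanh(W1 @ x)

x = rng.normal(size=16)
print("before:", forward(x))   # small, well-behaved outputs

W2[0, 0] = 9e9                 # blindly "accelerate" one parameter
print("after: ", forward(x))   # that output coordinate explodes to ~1e9
```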
Guillaume Verdon:
Your gas analogy for entropy is fascinating. Fundamentally, physical irreversibility stems from the Second Law. Simply: when a system releases heat, it cannot return to its prior state—because forward-probability vastly exceeds backward-probability, growing exponentially with heat dissipation.
In essence, this leaves a “dent” in the universe—like an inelastic collision. A bouncy ball hitting ground rebounds—elastic. Clay hitting ground flattens—inelastic, nearly irreversible.
Essentially, every bit of information “fights” for existence. To persist, each bit must leave more indelible traces—like a larger “dent” in the universe.
This principle explains how life/intelligence emerge from primordial “soup.” As systems grow more complex, they contain more information bits—and each bit conveys something. Information *is* entropy reduction—since entropy measures ignorance, information reduces it.
Eddy Lazzarin: What *is* E/acc?
Guillaume Verdon:
E/acc is essentially a “meta-cultural prescription.” It’s not a culture itself—but prescribes *what* to accelerate. Core acceleration is material complexification—enabling better environmental prediction. This boosts autoregressive predictive capacity—and captures more free energy. It ties to the Kardashev Scale—we achieve this by dissipating heat.
TechFlow Note: The Kardashev Scale, proposed in 1964 by Soviet astronomer Nikolai Kardashev, assesses civilizational advancement by energy utilization scale. It has three types: Type I (planetary energy), Type II (stellar-system energy, e.g., Dyson spheres), and Type III (galactic energy). As of 2018, humanity sits at ~0.73.
From first principles, this is precisely why the Kardashev Scale is considered the ultimate metric for civilizational advancement.
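TechFlow Note: the ~0.73 figure in the note above comes from Carl Sagan’s interpolation formula, which is simple enough to compute directly:

```python
import math

def kardashev(power_watts: float) -> float:
    """Sagan's interpolation: K = (log10(P) - 6) / 10, with P in watts."""
    return (math.log10(power_watts) - 6) / 10

print(kardashev(2e13))   # humanity's ~2e13 W -> ~0.73
print(kardashev(1e16))   # Type I:   planetary-scale power
print(kardashev(4e26))   # Type II:  roughly the Sun's total output
print(kardashev(4e37))   # Type III: roughly a galaxy's output
```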
Eddy Lazzarin: Using physics and entropy metaphors to describe phenomena is a tool—to describe reality we directly experience. E.g., our economic productivity accelerates; tech advances accelerate—bringing many consequences. Is that your understanding of “acceleration”?
Guillaume Verdon:
Fundamentally, regardless of system boundaries, it becomes better at predicting its surroundings. Through prediction, it secures more resources for survival/expansion. This applies to companies, individuals, nations—even Earth itself.
Extending this trend yields: We’ve found a way to convert free energy into predictive capacity—namely AI. This capacity drives Kardashev-scale expansion.
This means more energy, more AI, more compute, more resources. Though we emit entropy (disorder) into the universe, we create order—gaining “negentropy,” entropy’s inverse.
Some ask: Since entropy increases, why not just destroy everything? Answer: A one-off burst of destruction halts further entropy production, whereas life keeps producing it. Life is the “optimal” state—like a flame chasing energy, growing smarter at finding sources.
Nature’s trajectory: We’ll escape Earth’s gravity well—seeking cosmic “pockets” of free energy—and use them to self-organize into more complex, intelligent systems—expanding across the cosmos.
This aligns with effective altruism’s (EA) ultimate goal—and echoes Musk-style cosmism/expansionism: pursuing universalist, expansionist visions.
E/acc offers a foundational guiding principle. Its core idea: Whatever policy or action helps us ascend the Kardashev Scale is worth pursuing—and points our life’s direction.
E/acc is a meta-heuristic mindset—applicable to policy design *and* personal life. To me, this mindset *is* a culture. It’s deeply “meta-narrative”—designed to apply universally, anytime, anywhere. It’s a highly durable, long-lived culture—in short, a thoughtfully engineered “Lindy culture.”
Core Disagreement
Shaw Walters: For you, this discussion holds deeper significance—it’s almost a mathematically self-consistent “spiritual system.” For those without a post-“God is dead” faith replacement, it fills spiritual voids—offering comfort and hope. Yet we can’t ignore its real-world relevance—it’s unfolding *now*. I think that’s Eddy’s focus.
Vitalik, I noticed insightful commentary on D/acc’s real-world issues in your blog. We’ll dive deeper later—I think one day we should lock you two in a room for quantum-level debate.
Vitalik: What inspired you? What do E/acc and D/acc mean to you—and what drew you to D/acc?
Vitalik Buterin:
For me, D/acc stands for “decentralized, defensive acceleration”—but also implies “differentiated” and “democratized.” D/acc’s core idea: Technological acceleration is critically important for humanity—this should be our baseline goal.
Even reviewing the 20th century—though tech brought problems, it brought vast benefits. Consider life expectancy: despite wars and turmoil, 1955 Germany’s average lifespan exceeded 1935’s—showing tech improved quality of life across the board.
Today, the world is cleaner, more beautiful, healthier, more interesting. It feeds more people—and enriches life—profoundly positive for humanity.
Yet we must recognize: These advances weren’t accidental—they resulted from explicit human intent. E.g., in the 1950s, smog choked cities. People recognized it as a problem—and acted. Now, smog is greatly reduced in many places. Similarly, ozone depletion was tackled via global cooperation.
Also, I’ll add: Amid rapid AI advancement, I see two main risks.
One is multipolar risk. As tech proliferates, more people may wield it dangerously. Imagine an extreme scenario—tech lets “anyone buy nukes like convenience-store snacks.”
Then there’s concern about AI itself. We must seriously consider AI developing autonomy. Once powerful enough to act without human intervention, its decisions become unpredictable—deeply concerning.
There’s also unipolar risk. A single AI is one threat. Worse, AI combined with modern tech could create an inescapable, permanent dictatorship. This unnerves me—and remains my focus.
E.g., in Russia, tech brings progress *and* peril. Living standards improve—but freedoms decline. Protesters get recorded by cameras—then arrested at midnight.
AI’s rapid advancement accelerates this power-centralization trend. So D/acc aims to: Chart a path forward—continuing acceleration, even accelerating further—while genuinely addressing both risks.
Comparing E/acc and D/acc
Eddy Lazzarin: So your point is that D/acc focuses on risk categories underemphasized or ignored in E/acc frameworks?
Vitalik Buterin:
Exactly. Tech development carries multiple risks—and their prominence shifts across contexts and world models. E.g., risk priorities change as tech speeds up or slows.
But I also believe we can take many effective steps to mitigate these risks—regardless of category.
Guillaume Verdon:
I think Vitalik and I both deeply care about AI-driven power concentration—and this was central to E/acc’s early phase: advocating open source to decentralize AI power.
We worry “AI safety” could be abused. It’s so attractive that power-seeking institutions may weaponize it—to consolidate AI control—and convince the public: “For your safety, ordinary people shouldn’t access AI.”
Indeed, if massive cognitive gaps exist between individuals and centralized institutions, the latter gains total control. They can build complete mental models of you—and guide behavior via prompt engineering.
Thus, we want AI power symmetrized. Like the U.S. Constitution’s Second Amendment—preventing government monopoly on violence so citizens can check overreach. Similarly, AI needs analogous mechanisms to prevent power concentration.
We must ensure everyone can own AI models and hardware—spreading tech widely to decentralize power.
Yet I think halting AI research entirely is unrealistic. AI is foundational—perhaps even a “meta-technology”—driving other tech advances. It grants stronger prediction—applicable to nearly any task—boosting efficiency massively. So AI doesn’t just drive acceleration—it accelerates acceleration itself.
This acceleration’s essence is complexification: things become more efficient, life more convenient. One reason we feel happy is that survival and informational continuity are secured. This “happiness” acts as an intrinsic biological estimator—measuring whether our existence persists.
From this view, effective altruism’s hedonic utilitarianism—“maximize happiness”—may not be optimal. I prefer objective progress metrics—precisely E/acc’s core. It asks: Objectively, is our civilization advancing? Are we achieving scalable leaps?
To achieve scalability, we must drive complexification—and improve tech. Yet as Vitalik notes, if AI power concentrates in few hands, it harms overall growth; if widely dispersed, outcomes improve dramatically.
On this, we’re highly aligned.
Open Source, Open Hardware, and Local Intelligence
Shaw Walters: Your discussion touched vital common ground. Both of you strongly support open source. Vitalik contributed much MIT-licensed open-source code—though I know you’ve since developed new views on GPL.
Now you support not just open-source software—but open hardware. Though historically separate domains, they’re converging.
So I’m curious: How do you view “open weights” and “open hardware”? Do E/acc and D/acc diverge here? What’s your vision for the future—and are there differing views?
Guillaume Verdon:
To me, open source accelerates hyperparameter search. It enables collective intelligence—cooperatively exploring design space. This is acceleration’s benefit: developing better tech, stronger AI—even AI designing better AI—ever-faster.
I believe diffusing knowledge is diffusing power—and disseminating “how to build intelligence” is especially vital. We don’t want scenarios like past U.S. government discussions about “putting the genie back in the bottle”—not banning linear algebra outright, but restricting AI-related math research. To me, that’s like banning biology study—a massive regression.
Knowledge has spread—it can’t be undone. If the U.S. bans AI research, other countries, third parties, or lax jurisdictions will advance it—worsening global capability gaps and raising risks.
Thus, we see “capability gap” as a top risk. Reducing it requires ensuring AI is decentralized.
Whenever I hear “AI doomism” narratives—“AI is dangerous; only we can manage it—so trust us”—I’m deeply skeptical. Even with good intentions, excessive centralization invites power-seekers to replace them. We’ve warned for years—and it’s now happening. Like Dario (Anthropic CEO) facing realpolitik lessons this week.
Vitalik Buterin:
I typically classify tech-development risks into two types: unipolar and multipolar risks.
Unipolar risk is epitomized by Anthropic’s case. They were publicly singled out for refusing to let their AI develop fully autonomous weapons or mass-surveil Americans—suggesting governments and militaries *do* intend such uses. Surveillance tech’s evolution has profound implications—it empowers the strong, shrinks pluralistic voices, and constrains ordinary people’s freedom to explore alternatives. And as tech advances, surveillance becomes vastly more pervasive.
Under D/acc, we’re backing “open-source defensive technologies.” These help ensure individual safety and privacy in a world of heightened capabilities. In biotech, we aim to boost pandemic response globally. We believe a balance is achievable: containment as rapid and effective as China’s, with as little disruption to daily life as Sweden’s. This balance is technically feasible—via air filtration, UVC disinfection, and virus detection.
One company we’ve invested in is building a fully open-source endpoint device that passively detects airborne viral particles—e.g., SARS-CoV-2. It monitors air quality (CO₂, AQI)—combining local encryption, anonymization, and differential privacy for data security. Data goes to servers via fully homomorphic encryption—enabling analysis without raw-data exposure—and collective decryption yields final results.
Our goal: Enhance security *while* protecting privacy—and effectively counter unipolar and multipolar risks. I believe this global collaboration is key to building a better future.
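TechFlow Note: the episode names the privacy building blocks but not their parameters. As one hedged illustration of the differential-privacy step, an endpoint could add calibrated Laplace noise before uploading; the sensitivity and epsilon values below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng()

def laplace_dp(value, sensitivity, epsilon):
    """Laplace mechanism: noise scaled to sensitivity/epsilon gives an
    epsilon-differentially-private report."""
    return value + rng.laplace(0.0, sensitivity / epsilon)

true_co2 = 612.0   # hypothetical reading (ppm) from one device
# Assume one person's presence shifts a reading by at most ~50 ppm.
print(f"uploaded: {laplace_dp(true_co2, 50.0, 1.0):.1f} ppm")

# Aggregating many noisy reports still recovers the regional trend,
# while no single upload pins down what happened in one room.
reports = [laplace_dp(true_co2, 50.0, 1.0) for _ in range(1000)]
print(f"aggregate: {np.mean(reports):.1f} ppm")
```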
On hardware, I think we need not just open hardware—but verifiable hardware. Ideally, every camera should publicly prove its purpose. Signature verification, LLM-based analysis, and public audit mechanisms can ensure devices serve only lawful goals—e.g., detecting violence and alerting—without invading privacy.
In my envisioned future, streets deploy many cameras to prevent violence—but only if devices are fully transparent, publicly auditable, and limited strictly to public-safety protection—not surveillance or misuse.
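TechFlow Note: Vitalik doesn’t specify a scheme for “verifiable hardware.” One hedged sketch of the signature part, using Ed25519; a real attestation system would also need the key anchored in tamper-resistant hardware, which software alone can’t provide:

```python
import hashlib, json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The device key would live in a secure element; generated here for demo.
device_key = Ed25519PrivateKey.generate()
device_pub = device_key.public_key()   # published in an open registry

firmware = b"open-source camera firmware v1.2"   # hypothetical auditable build
manifest = {
    "purpose": "detect violence and alert; no face recognition, no storage",
    "firmware_sha256": hashlib.sha256(firmware).hexdigest(),
}

# The device signs every claim it makes about its own purpose.
payload = json.dumps(manifest, sort_keys=True).encode()
signature = device_key.sign(payload)

# Anyone can check the claim against the registry key; verify() raises
# InvalidSignature if the manifest was tampered with.
device_pub.verify(signature, payload)
print("manifest verified:", manifest["purpose"])
```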
Eddy Lazzarin: Are open hardware and verifiable hardware concepts E/acc or D/acc territory? Can you pinpoint a clear divergence?
Guillaume Verdon:
I’m unsure if open hardware was discussed in detail before—but I see the biggest current risk as the gap between centralized and decentralized entities—i.e., individuals versus governments or large institutions.
Under current compute paradigms, running a high-performance AI model consumes hundreds of kilowatts—far beyond individual reach. Yet people crave owning and controlling intelligent tools—explaining the recent “Openclaw + Mac mini” craze: demand for personal AI assistants.
To achieve power symmetry between individuals and centralized institutions, the sole path is “intelligence densification.” We need vastly more energy-efficient AI hardware—letting individuals run powerful models on simple devices—owning their intelligent tools. This is critical—especially as future AI models enable online learning, becoming highly “sticky”—like replacing a personal assistant.
Eddy Lazzarin: But aren’t we already exponentially reducing compute-hardware costs? Why categorize ideas as E/acc or D/acc? What message do we intend to convey to society through such labels?
Guillaume Verdon:
For me, this is core to Extropic’s mission. We boost “intelligence per watt”—significantly increasing total intelligence we can create. This progress fuels the Jevons Paradox (TechFlow Note: Jevons Paradox states that when resource-use efficiency improves, lower costs spur increased consumption—raising total usage). Simply: If we convert energy to intelligence (or other value) more efficiently, energy demand rises—propelling civilizational progress and complexification.
So I see this as among today’s most critical tech challenges—directly tied to AI power decentralization. Open hardware is just one path. Long-term, I believe any von Neumann architecture hardware (TechFlow Note: Von Neumann Architecture, proposed by John von Neumann in 1945, is the foundation of modern computers—centered on stored-program concept: instructions and data share memory, using binary and sequential execution) or digital tech will become obsolete—like Paleolithic tools.
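TechFlow Note: the Jevons dynamic is easy to make concrete. A toy model, assuming constant-elasticity demand with elasticity 1.5 (our assumption; the paradox requires elasticity greater than 1):

```python
def demand(cost: float, elasticity: float = 1.5) -> float:
    """Constant-elasticity demand for 'intelligence': units = cost^(-elasticity)."""
    return cost ** (-elasticity)

for gain in [1, 2, 4, 8]:
    energy_per_unit = 1.0 / gain          # hardware efficiency improves
    units = demand(energy_per_unit)       # cheaper -> far more demanded
    print(f"{gain}x efficiency: {units:6.1f} units, "
          f"total energy {units * energy_per_unit:.2f}")
# Output: total energy 1.00 -> 1.41 -> 2.00 -> 2.83: consumption *rises*
# as efficiency improves, which is the Jevons paradox.
```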
Eddy Lazzarin: But doesn’t capitalism already invest hundreds of billions yearly here—via market incentives? Aren’t investments in alternative hardware, semiconductors, energy production driving tech diversity?
Guillaume Verdon:
We need more diverse options—not over-reliance on single tech directions. Across policy, culture, and tech, we must maintain diversity in design space—not let all resources monopolized by one “beast.” Otherwise, we risk “hyperparameter-space betting”: over-investing in one direction—if it fails, causing major setbacks—or ecosystem collapse.
Shaw Walters: Can I say we’ve solved this? Your views on open source and decentralization align closely—giving me great optimism, as this is my core concern. Many today face uncertainty—asking, ‘Why do we need these technologies?’ Your appeal lies in saying, ‘It’ll be okay—progress is built into the mechanism.’
Guillaume Verdon:
I think anxiety amid high uncertainty about future tech is natural. It’s not full “fog of war”—but clouds clear forecasting. Indeed, anxiety evolved as instinct—helping us navigate unknown risks. E.g., seeing a phone near a table edge triggers moving it to safety—classic anxiety.
Yet we must realize: If we try eliminating all uncertainty and risk, we’ll miss tech’s vast potential and rewards. Our tech-capital system has reached dynamic equilibrium with current capabilities—but disruptive tech breakthroughs shatter it—requiring systemic re-adaptation.
Now, AI lets us handle higher complexity with less energy—enabling more challenging tasks—with greater potential returns. Though “vibe coding” can’t yet ship complex projects fast, we’re approaching it. Future efficient tech will support larger populations—and lift human quality of life.
Of course, adaptation periods may follow. But rigidity in rapid change is worst—so we need hedging strategies: exploring multiple paths—policies, tech approaches, algorithms—open and closed source—since we can’t predict the future.
Thus, we must diversify risk—trying many possibilities. Eventually, successful tech/policy directions will dominate—and we’ll ride that wave.
Eddy Lazzarin: If E/acc and D/acc truly diverge, my understanding is it relates to *how* tech progress is guided. Vitalik, how should progress be guided—and how much control do we really have?
Vitalik Buterin:
For me, D/acc doesn’t oppose techno-capital’s tide—but seeks to actively steer it toward pluralism and decentralization. E.g., how can we make the world more pluralist-friendly? Can we significantly boost biosafety in years? Or build near-bug-free OSes—dramatically improving cybersecurity?
Take “bug-free code”—once seen as naive fantasy for 20 years—but I believe it’ll arrive faster than most expect. On Ethereum, we already machine-prove full mathematical theorems.
Overall, D/acc aims to ensure rapid tech advancement proceeds with minimal destructiveness and power centralization. Achieving this demands active effort—not passive waiting. My role: invest resources (funds, ETH) and share views to inspire builders.
Also, political and legal reform can make the world more “D/acc-friendly.” E.g., legal incentives could accelerate comprehensive cybersecurity transformation.
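TechFlow Note: “machine-proving theorems” here means proofs a checker verifies all the way down to the axioms. A trivial example in Lean 4; applying the same standard to consensus or contract code is what “near-bug-free” means:

```lean
-- The Lean kernel accepts this only if the proof is actually correct;
-- there is no "looks right" middle ground.
theorem add_comm' (a b : Nat) : a + b = b + a := by
  omega
```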
Guillaume Verdon:
From my view, AI functions like a “Maxwell’s Demon”—consuming energy to lower entropy. Whether fixing code bugs or reducing chaos (e.g., preventing virus spread), AI excels. So can we agree: more AI is beneficial—and makes the world safer? Indeed, AI’s capabilities greatly enhance security.
Should AI Slow Down?
Guillaume Verdon: I think we’ve entered tonight’s core discussion. Thanks for your patience—time for the hard question: Why support banning data center development?
Vitalik Buterin:
Okay, I’ll answer. First, acknowledge AI’s pace *is* extremely fast—and I can’t precisely gauge it. Years ago, I estimated AGI between 2028–2200; now the range may narrow—but uncertainty remains vast.
We face reality: AI’s rapid advancement may bring swift, destructive—even irreversible—changes. E.g., labor markets may collapse—mass unemployment. An extreme case: if AI vastly exceeds humans, it may gradually seize Earth—and expand across the galaxy. Will it care about human welfare? Unknown.
As noted earlier: if you set a neural-network weight to 9 billion, the system likely fails. So tech acceleration has two directions. One is gradient-descent-like—strengthening the system. The other is reckless parameter-setting—risking total loss of control.
Guillaume Verdon:
My stance opposes “full deceleration” entirely.
Yet—as with neural net hyperparameter tuning—even “gradient descent” optimization needs a suitable “learning rate.” Acceleration is constant trial-and-error—finding the optimal speed for system longevity and resilience.
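TechFlow Note: the learning-rate analogy in runnable form. Minimizing f(x) = x² by gradient descent, the identical algorithm converges or blows up depending only on step size:

```python
def gradient_descent(lr: float, steps: int = 20) -> float:
    """Minimize f(x) = x^2 (gradient 2x) from x = 1 with a fixed step size."""
    x = 1.0
    for _ in range(steps):
        x -= lr * 2 * x
    return x

print(gradient_descent(lr=0.1))   # ~0.01: converges toward the optimum
print(gradient_descent(lr=1.1))   # ~38, oscillating in sign: diverges
```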
Long-term, social systems adapt to new tech—and select paths favoring overall development. Views like “this tech is too powerful/disruptive—it’ll crash the system irrecoverably”—I reject. Instead, tech progress brings more opportunity and prosperity.
We must recognize: Tech development isn’t zero-sum. If we tie economic value to energy—e.g., oil dollars—cash becomes a “free-energy IOU.” Vast free energy awaits exploitation—but accessing it requires solving complex problems. To colonize Mars or build Dyson spheres, we need smarter, more efficient intelligence—unlocking immense potential.
Unfortunately, anxiety is easily weaponized politically—some politicians exploit future-fear to gain power. They say: “Anxious about the future? Give me power—I’ll shut off risk sources, and you’ll feel safe. Don’t worry—don’t risk.” But nations choosing otherwise will surge ahead.
We must weigh opportunity cost. Ask: How many humans can tech sustain? How many lives can it save? If you fear “silicon intelligence evolving faster than us,” react with anger—support accelerated biotech to surpass it. Accelerate—or perish.
Actually, I believe biological computation is far more powerful than imagined. As a biomimetic computing researcher, I see biology-AI synergy. E.g., embryo screening “trains” us—we’re models. We must open biotech acceleration possibilities. Ultimately, biological and silicon intelligence fuse—enhancing cognition.
Future may bring always-on AI agents—observing the world, learning in real-time—becoming personalized cognitive extensions. Real risk: centralized control—creating power monopoly.
Eddy Lazzarin: I recall your D/acc blog noting opportunity cost is “hard to overstate.” So I know you agree. Any caveats?
Vitalik Buterin:
Yes, I fully agree opportunity cost is extremely high—and endorse that ideal future. But our key divergence: I truly don’t believe “today’s humans and Earth” have sufficient resilience. I think we may get only one shot at correct tech-development pathways—a reality emerging over the past century.
Guillaume Verdon:
Returning to my thermodynamics point: If civilizational persistence/growth is the ultimate goal, one law holds: once we expend vast free energy to create “evidence” and drive system complexification, reversal becomes near-impossible.
In other words, the farther we climb the Kardashev Scale, the lower full reversal probability. So acceleration is humanity’s best path to civilizational persistence. I believe slowing tech *increases* extinction risk. Without tech, we face survival crises; with it, we find solutions—ensuring persistence and evolution.
I think people should embrace the future more openly—welcoming new tech. Past taboos—like biological intervention—should now be fully opened. They were forbidden due to insufficient complex-system understanding—but tech now equips us to handle it.
We must accelerate across all domains—the only path forward—and consistent with thermodynamic law. So E/acc is rational from first principles. I understand Vitalik’s anxiety—and we must stay sensitive—but avoid deep-anxiety feedback loops: “I lack clarity about the future—so stop everything.” We mustn’t trap ourselves.
Because people already weaponize anxiety.
Autonomous Agents and Artificial Life
Shaw Walters: I notice a trend: Both of you emphasize “the future will be great—if we do certain things.” That “if” hinges on building dikes or fortresses against excessive centralization. But I spot a potential divergence—especially with latest AI models, clearly more advanced than last year. Biggest change may be described awkwardly: Web 4.0.
Specifically, I mean “autonomous life”—an autonomous intelligent agent with its own funds—existing independently on the internet. Vitalik, you express concerns. First, please define “autonomous agent”; second, make the strongest proponent case (“steelman”) for supporters like me—why should we like them? What value do they offer? If all goes well, when might they arrive?
Vitalik Buterin:
First, the case for “autonomy”—most intuitive point: it’s truly fascinating.
We’ve loved creating our own worlds since childhood—hence loving Lord of the Rings, Three-Body Problem, or Harry Potter. Now our worlds exceed books—even games. Take World of Warcraft: one thing I love is its near-total immersive virtual world—players explore, interact with characters and environments.
Another key reason is “convenience.” Historically, automating tasks made human life easier and freer. We must remember: half the world’s population still lives in conditions requiring grueling labor for decent livelihoods. If AI automates 95% of each job—not fully replacing jobs—that’s transformative. It could raise living standards 20x—thrilling to me.
Yet my concern: Do these agents’ goals/values align with ours? Imagine evolution: AGI emerges—then another—then another. What happens to humans?
I don’t believe human morality and goals compress into low-complexity optimization functions. Our goals/dreams are