
Vitalik wrote a proposal teaching you how to secretly use large AI models
Vitalik believes that in the AI era, users should not have to surrender their identity to use an AI tool.
Author: TechFlow
Everyone worldwide is talking about AI, while discussions about crypto on social media timelines have grown noticeably quieter.
Meanwhile, ETH has been trading sideways near $2,000 for nearly two months, and seemingly few people care much anymore about what Vitalik says or does.
Yet recently, while scrolling through his X feed, I realized AI isn’t just influencing us—it’s influencing him too. Over the past month, a significant portion of his posts have centered on AI—down to specific technical proposals.
The most noteworthy is a proposal jointly published by Vitalik and Davide Crapis, the Ethereum Foundation’s AI lead, on ethresear.ch on February 11, titled “ZK API Usage Credits.”

In one sentence: Use zero-knowledge proofs to enable anonymous access to large AI models.
Today, whether you use ChatGPT or call Claude’s API, there’s only one way to pay:
Register an account, link an email, and bind a credit card.
Every conversation, every prompt you send, is tied directly to your identity. The platform knows exactly what you asked, when you asked it, and how often.
Vitalik and Crapis’s proposal offers an alternative path:
- A user deposits funds—say, 100 USDC—into a smart contract.
- The contract registers this deposit on an encrypted on-chain list. Each time the user calls the API, they don’t need to reveal their identity—only generate a zero-knowledge proof.
- This proof convinces the service provider of two things: that the user is on the list, and that their balance suffices—but reveals nothing about which entry on the list they are.

The service provider receives payment and can prevent abuse—but never learns the user’s identity.
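The flow above can be sketched in miniature. The sketch below is an illustrative assumption, not code from the proposal: the class name, method names, and the way the "proof" is checked are all invented for clarity. Crucially, a real implementation would verify a zk-SNARK that proves membership and sufficient balance without revealing the secret; here the toy verifier sees the opening directly, so only the bookkeeping (commitments, nullifiers, balance roll-forward) is faithful to the idea.

```python
import hashlib
import secrets

def h(*parts: bytes) -> bytes:
    """Hash helper standing in for the circuit's hash function."""
    m = hashlib.sha256()
    for p in parts:
        m.update(p)
    return m.digest()

class CreditRegistry:
    """Toy stand-in for the on-chain contract: it stores deposit
    commitments and spent nullifiers. In the real proposal the
    membership/balance check is a zero-knowledge proof; in this
    sketch the 'proof' is the opening itself."""
    def __init__(self):
        self.commitments = set()   # hash(secret || balance), one per deposit
        self.nullifiers = set()    # prevents charging the same call twice

    def deposit(self, secret: bytes, amount: int) -> None:
        # The user registers a commitment; the chain sees only the hash,
        # never the secret or (in the real scheme) the depositor.
        self.commitments.add(h(secret, amount.to_bytes(8, "big")))

    def verify_and_spend(self, secret: bytes, balance: int,
                         cost: int, call_id: bytes) -> bool:
        """Check (1) the deposit is on the list, (2) the balance covers
        the cost, (3) this call hasn't been charged before; then roll
        the commitment forward to the reduced balance."""
        old = h(secret, balance.to_bytes(8, "big"))
        nullifier = h(secret, call_id)
        if old not in self.commitments or balance < cost:
            return False
        if nullifier in self.nullifiers:
            return False  # replay: this call was already paid for
        self.nullifiers.add(nullifier)
        self.commitments.remove(old)
        self.commitments.add(h(secret, (balance - cost).to_bytes(8, "big")))
        return True

# Usage: deposit 100 units, pay 3 per API call.
sk = secrets.token_bytes(32)
reg = CreditRegistry()
reg.deposit(sk, 100)
assert reg.verify_and_spend(sk, 100, 3, b"call-1")      # first call succeeds
assert not reg.verify_and_spend(sk, 100, 3, b"call-1")  # replay is blocked
```

The nullifier is what lets the provider meter usage and block abuse without learning which deposit on the list is being drawn down: it is deterministic per (secret, call), so a double charge is detectable, but it is unlinkable to the original commitment without the secret.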
You can interpret this proposal as reflecting Vitalik’s belief that, in the AI era, users shouldn’t have to surrender their identity just to use an AI tool.
This proposal remains at the research stage and is still far from implementation; major AI model providers may also resist adopting such a model. Meanwhile, the discussion thread is filled with counterarguments and skepticism, with critics asserting AI model providers will always find ways to uncover users’ real identities.
Yet I believe the significance of this proposal lies not solely in whether it becomes reality.
Privacy has been Vitalik’s focus for a decade—from early support for Tornado Cash to championing zero-knowledge proofs as a core Ethereum technology path. That thread has never broken. But over the past few years, privacy within the crypto industry has lacked a sufficiently compelling narrative to anchor it.
AI fills that gap. When you share more with large language models daily than with anyone else, privacy becomes a genuine, urgent need.
Vitalik Embraces AI
From February to now, a substantial share of Vitalik’s X posts have revolved around AI—so consistently and densely that it’s clearly more than casual commentary.
Yesterday he posted a long thread describing a recent cryptography conference he attended. Attendees cared deeply about privacy, open source, and censorship resistance—but felt no attachment to blockchain whatsoever.

Among them, he conducted a thought experiment:
Forget “we’re the Ethereum community.” Start from scratch: Where is Ethereum *actually* most useful?
His answer: Ethereum’s most fundamental value lies in serving as a public bulletin board—a place where anyone can write, anyone can read, and no one can alter or delete anything.
Within the AI context, this may be the most important thing Vitalik has said in the past two years.
We’re entering an era of infinitely cheap generation. AI can mass-produce text, images, video—and even synthetic identities. When everything can be forged, what becomes scarce?
That question converges on one answer: a public, persistent, immutable data layer. And maintaining tamper-proof records is precisely what Ethereum does.
Over the past two years, criticism of Ethereum has boiled down to one question: What, exactly, can’t others replicate?
Now, Vitalik hasn’t answered that question head-on.
Yet over the past year, the Ethereum Foundation has quietly taken several notable steps: assembling a roughly 50-person privacy research team, releasing the Kohaku privacy framework, appointing a dedicated AI lead, and listing institutional-grade privacy and faster transaction finality as top priorities in its 2026 roadmap.
Looking back at his intense output over the past month, nearly all of it centers on Ethereum’s privacy and efficiency challenges in the age of AI.
I think Vitalik is betting on one thing: the more powerful AI becomes, the more acute the demand for privacy and verification infrastructure grows. Whether Ethereum can meet that demand remains uncertain—but he’s clearly already chosen his table.
ETH remains stuck near $2,000. Most people still aren’t paying much attention to what he’s been saying lately.
But perhaps, looking back in a few years, this very period will be the one we should have paid attention to.
Join the TechFlow official community to stay tuned:
Telegram: https://t.me/TechFlowDaily
X (Twitter): https://x.com/TechFlowPost
X (Twitter) EN: https://x.com/BlockFlow_News













