
The Agent Era: The Clash and Symbiosis of AI and Crypto
TechFlow Selected

In the future, agents will jointly shape the digital world in a new form alongside decentralized forces.
Author: Zeke, YBB Capital Researcher

1. The Fickleness of Attention
Over the past year, with application-layer narratives failing to keep pace with rapid infrastructure development, the crypto space has gradually devolved into a game of competing for attention. From Silly Dragon to Goat, from Pump.fun to Clanker, the fickleness of attention has driven this competition ever more inward and intense. It began with the most clichéd approach—grabbing eyeballs for monetization—then quickly evolved into platform models where attention demanders and suppliers converge, and finally gave rise to silicon-based entities as new content creators. Among the countless forms of meme coins, AI Agents have finally emerged as a rare point of consensus between retail investors and VCs.

Attention is ultimately a zero-sum game, yet speculation can indeed catalyze uncontrolled growth. In our previous article on UNI, we revisited the dawn of blockchain's last golden era—the explosive rise of DeFi began with Compound Finance's launch of liquidity provider (LP) mining. Moving in and out of yield farms offering APYs in the thousands or even tens of thousands was the most primitive form of on-chain gamesmanship during that period, despite ending in widespread farm collapses. Yet, the frenzy of gold rush miners did leave behind unprecedented liquidity on blockchains. DeFi eventually transcended pure speculation to mature into a full-fledged sector, meeting users' financial needs across payments, trading, arbitrage, staking, and more. Similarly, AI Agents today are undergoing such a chaotic phase—we are now exploring how Crypto can better integrate with AI to ultimately elevate the application layer to new heights.
2. How Do Agents Achieve Autonomy?
In our previous article, we briefly introduced the origin of AI Memes—Truth Terminal—and shared visions for the future of AI Agents. This article focuses primarily on AI Agents themselves.
We begin with the definition of an AI Agent. "Agent" is a relatively old term in AI with no fixed definition, but it fundamentally emphasizes autonomy—any AI capable of perceiving its environment and acting accordingly can be considered an agent. In modern usage, AI Agent refers more precisely to intelligent agents: large language models (LLMs) equipped with systems designed to mimic human decision-making processes. Academia views this system as one of the most promising paths toward AGI (Artificial General Intelligence).
In early versions of GPT, we could clearly sense that LLMs felt human-like, yet when answering complex questions they often provided vague or inaccurate responses. The root cause lies in these models being based on probability rather than causality. Moreover, they lacked essential human capabilities such as tool use, memory, and planning—all of which AI Agents aim to address. Thus, we can summarize: AI Agent = LLM + Planning + Memory + Tools.
Large models based solely on prompts resemble static beings—they only come alive when prompted. In contrast, the goal of an intelligent agent is to emulate a truly dynamic human. Today’s ecosystem agents are mostly fine-tuned versions of Meta’s open-source Llama 70B or 405B models (differing in parameter count), endowed with memory and the ability to access tools via APIs. However, they still rely heavily on human input or assistance in other areas—including coordination with other agents. Hence, current agents mainly exist within social networks in the form of KOLs (Key Opinion Leaders). To make agents more human-like, integrating planning and action capabilities is crucial, with Chain of Thought (CoT) being particularly pivotal among planning components.
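The "AI Agent = LLM + Planning + Memory + Tools" decomposition above can be sketched in a few lines of code. This is a minimal, illustrative skeleton only: the model call is stubbed out, and all names here (`call_llm`, `Tool`, `Agent`) are hypothetical, not taken from any specific agent framework.

```python
# Illustrative sketch of: AI Agent = LLM + Planning + Memory + Tools.
# All names are hypothetical; the LLM call is a stub standing in for a
# fine-tuned open-source model (e.g. Llama 70B/405B) accessed via an API.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str
    run: Callable[[str], str]  # takes an input string, returns a result string

def call_llm(prompt: str) -> str:
    """Stub: a real agent would query a language model here."""
    return f"(model response to: {prompt[:40]}...)"

@dataclass
class Agent:
    tools: dict[str, Tool]
    memory: list[str] = field(default_factory=list)  # record of past steps

    def plan(self, goal: str) -> list[str]:
        # Planning: ask the model to decompose the goal into steps (simplified).
        call_llm(f"Decompose into steps: {goal}")
        return [f"step for: {goal}"]

    def act(self, step: str) -> str:
        # Tool use: pick a tool (naively, the first one) and store the outcome.
        tool = next(iter(self.tools.values()))
        result = tool.run(step)
        self.memory.append(f"{step} -> {result}")
        return result

search = Tool("search", "look up information", run=lambda q: f"results for {q}")
agent = Agent(tools={"search": search})
for step in agent.plan("summarize today's on-chain activity"):
    agent.act(step)
print(len(agent.memory))  # one memory entry per executed step
```

The point of the sketch is the separation of concerns: the model reasons, the planner decomposes, tools touch the outside world, and memory persists across calls—exactly the capabilities a bare prompted LLM lacks.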
3. Chain of Thought (CoT)
The concept of Chain of Thought (CoT) first appeared in a 2022 paper by Google titled *“Chain-of-Thought Prompting Elicits Reasoning in Large Language Models,”* which demonstrated that generating intermediate reasoning steps could significantly enhance model reasoning, helping them better understand and solve complex problems.

A typical CoT prompt consists of three parts: (1) a clear task description; (2) logical foundations or principles supporting problem-solving; and (3) concrete examples demonstrating solutions. This structured approach helps models grasp task requirements, enabling step-by-step logical reasoning toward accurate answers, thereby improving efficiency and accuracy. While CoT may not offer significant advantages for simple tasks, it dramatically enhances performance on complex ones—reducing error rates and increasing output quality through incremental resolution strategies.
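The three-part structure described above can be made concrete with a small prompt-builder. This is a sketch under stated assumptions: the function name and sample content are illustrative, not taken from the cited paper.

```python
# Assembling the three-part CoT prompt: (1) task description,
# (2) reasoning principles, (3) worked examples. Names are illustrative.

def build_cot_prompt(task: str, principles: list[str],
                     examples: list[tuple[str, str]]) -> str:
    lines = [f"Task: {task}", "", "Approach:"]
    lines += [f"- {p}" for p in principles]          # (2) logical foundations
    lines.append("")
    for question, reasoning in examples:             # (3) concrete examples
        lines += [f"Q: {question}",
                  f"A: Let's think step by step. {reasoning}", ""]
    lines.append("Q:")  # the model continues from here with its own reasoning
    return "\n".join(lines)

prompt = build_cot_prompt(
    task="Solve arithmetic word problems.",
    principles=["Identify the quantities.",
                "Compute intermediate results before answering."],
    examples=[("I have 3 apples and buy 2 more. How many?",
               "Start with 3, add 2, so 3 + 2 = 5. The answer is 5.")],
)
print(prompt)
```

The worked example shows the model the *shape* of a good answer—explicit intermediate steps—which is what elicits step-by-step reasoning on the new question appended at the end.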
In building AI Agents, CoT plays a critical role. Agents must interpret incoming information and make rational decisions accordingly. CoT provides a systematic thinking framework that enables agents to effectively process and analyze inputs, transforming insights into actionable guidance. By breaking tasks down into smaller steps, CoT allows agents to carefully evaluate each decision point, minimizing errors caused by information overload. It also makes the decision-making process more transparent, so that agent behavior becomes more predictable and traceable and users can better understand the rationale behind agent actions. During environmental interactions, CoT further enables agents to continuously absorb new information and adapt their behavioral strategies.
As an effective strategy, CoT not only boosts the reasoning power of large language models but also plays a vital role in constructing smarter, more reliable AI Agents. Leveraging CoT, researchers and developers can create intelligent systems highly adaptable to complex environments with strong autonomy. In practical applications, CoT demonstrates unique strengths—especially in handling complex tasks—by decomposing them into manageable sub-steps, thus enhancing solution accuracy while improving model explainability and controllability. This stepwise methodology greatly reduces the risk of erroneous decisions under high complexity or data saturation. Meanwhile, it increases the overall traceability and verifiability of solutions.
The core function of CoT lies in integrating planning, action, and observation, bridging the gap between reasoning and execution. This cognitive pattern allows AI Agents to anticipate potential anomalies and devise countermeasures, while simultaneously accumulating new knowledge and validating predictions through real-world interaction, forming updated reasoning bases. CoT acts like a powerful engine of precision and stability, enabling AI Agents to maintain high operational efficiency even in complex environments.
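The plan–act–observe cycle described above can be sketched as a simple loop, in the spirit of ReAct-style agents. The model and tool calls are stubs, and every name here is illustrative rather than a real API.

```python
# Hedged sketch of the plan-act-observe cycle: reason over the trace,
# take an action, record the observation, and repeat. All names are stubs.

def model(prompt: str) -> str:
    # Stub: a real agent would ask an LLM for the next thought/action here.
    return "Action: lookup"

def execute(action: str) -> str:
    # Stub: tool execution producing an observation from the environment.
    return f"observation for {action}"

def react_loop(goal: str, max_steps: int = 3) -> list[str]:
    trace = [f"Goal: {goal}"]
    for _ in range(max_steps):
        thought = model("\n".join(trace))   # plan: reason over the trace so far
        observation = execute(thought)      # act, then observe the result
        trace += [f"Thought: {thought}", f"Observation: {observation}"]
    return trace

trace = react_loop("verify a prediction")
print(len(trace))  # 1 goal line plus 2 lines per step
```

Each iteration feeds the accumulated trace back into the model, which is how observed outcomes become the updated reasoning basis the text describes.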
4. The Right Kind of Pseudo-Demand
Which aspects of the AI tech stack should Crypto actually integrate with? In last year’s article, I argued that decentralizing computing power and data is key to reducing costs for small businesses and individual developers. In Coinbase’s recently published breakdown of the Crypto x AI landscape, we see a more detailed segmentation:
(1) Compute Layer (networks focused on providing GPU resources for AI developers);
(2) Data Layer (networks enabling decentralized access, orchestration, and validation of AI data pipelines);
(3) Middleware Layer (platforms or networks supporting the development, deployment, and hosting of AI models or agents);
(4) Application Layer (user-facing products leveraging on-chain AI mechanisms, both B2B and B2C).
Each of these layers carries grand visions—ultimately aiming to challenge Silicon Valley giants dominating the next era of the internet. As I asked last year: must we accept exclusive control over compute and data by a few tech titans? Their closed-source large models operate as black boxes. Science, humanity’s most trusted religion today, faces a future where every sentence generated by these models might be treated as truth—but how do we verify that truth? According to Silicon Valley’s vision, agents will eventually wield unimaginable permissions—such as access to your wallet for payments or control over your devices. How can we ensure no malicious intent exists?
Decentralization is the only answer. But sometimes, we must realistically assess: who will actually pay for these grand visions? Previously, we could overlook commercial sustainability, using tokens to offset idealism-driven imbalances. Today’s situation is far more severe—Crypto x AI must be grounded in reality. For instance, how should supply and demand be balanced in the compute layer given performance losses and instability, to achieve competitiveness against centralized cloud providers? How many real users will data-layer projects attract? How can we verify the authenticity and validity of their data? Who exactly needs this data? The same scrutiny applies to the remaining two layers. In this era, we don’t need so many seemingly correct pseudo-demands.
5. Memes Have Spawned SocialFi
As I mentioned earlier, memes have rapidly evolved into a Web3-native form of SocialFi. Friend.tech fired the starting shot for this wave of social apps, but unfortunately failed due to its hasty token design. Pump.fun proved the viability of a purely platform-based model—with no token issuance and no rigid rules. Demanders and suppliers of attention converge seamlessly—you can post memes, go live, launch tokens, comment, trade—all freely. Pump.fun simply charges a service fee. This mirrors the attention economy model of platforms like YouTube and Instagram, except the revenue model differs, and gameplay-wise, Pump.fun is far more Web3-native.

Base’s Clanker represents the culmination of this evolution. Benefiting from an integrated ecosystem built directly by the chain, Base supports its own social dApps, creating a complete internal loop. AI Agent Memes represent Meme Coins 2.0. Humans crave novelty, and Pump.fun now finds itself under intense scrutiny. Trend-wise, it's only a matter of time before silicon-based imagination replaces carbon-based crude humor.
I've mentioned Base countless times already—only the context changes each time. Chronologically, Base has never been first to market, yet it consistently ends up winning.
6. What Else Can Agents Be?
Pragmatically speaking, AI Agents are unlikely to become decentralized anytime soon. From the perspective of traditional AI development, achieving agent functionality isn't simply solved by decentralizing inference or open-sourcing models. Agents require integration with various APIs to access Web2 content, entail high operational costs, and typically depend on humans to design thought chains and coordinate multi-agent collaboration. We’ll endure a long transitional period before arriving at a suitable hybrid model—perhaps something akin to UNI. Still, as in my previous article, I believe agents will profoundly impact our industry, much like CEXs—flawed, yet undeniably important.
Last month’s survey paper *“AI Agents”* from Stanford and Microsoft extensively covered agent applications in healthcare, robotics, and virtual worlds. Its appendix already includes numerous experimental cases where GPT-4V functions as an agent involved in top-tier AAA game development.
We shouldn’t rush the pace of integrating agents with decentralization. Instead, I hope agents first fill the gaps in grassroots capability and speed. We have vast narrative ruins and empty metaverses waiting to be filled. At the right stage, we can then consider how to turn them into the next UNI.
Join TechFlow official community to stay tuned
Telegram: https://t.me/TechFlowDaily
X (Twitter): https://x.com/TechFlowPost
X (Twitter) EN: https://x.com/BlockFlow_News
