
YC Interview: The Founder of DeepMind Is Now Awaiting AI’s “Einstein Moment”
TechFlow Selected

“Problems related to continual learning, long-horizon reasoning, and certain aspects of memory remain unsolved—AGI must solve them all.”
Compiled & Translated by TechFlow
Guest: Demis Hassabis (Co-founder of DeepMind, 2024 Nobel Laureate in Chemistry, CEO of Google DeepMind)
Moderator: Gary Tan
Podcast Source: Y Combinator
Original Title: Demis Hassabis: Agents, AGI & The Next Big Scientific Breakthrough
Air Date: April 29, 2026
Editor’s Note
Demis Hassabis, CEO of Google DeepMind and Nobel Laureate in Chemistry, joined Y Combinator to discuss key remaining milestones on the path to AGI, advice for founders on staying ahead, and where the next major scientific breakthrough might emerge. The most practical insight for deep-tech founders is this: if you launch a ten-year deep-tech project today, you must factor in the arrival of AGI. He also revealed that Isomorphic Labs—the AI-driven drug discovery company spun out from DeepMind—is about to announce major news.

Key Quotes
AGI Roadmap and Timeline
- “The existing technical components will almost certainly form part of the final AGI architecture.”
- “Problems around continual learning, long-horizon reasoning, and certain aspects of memory remain unsolved—AGI must solve them all.”
- “If your AGI timeline aligns with mine—around 2030—and you’re launching a deep-tech project today, you must account for AGI arriving midway through.”
Memory and Context Window
- “The context window is roughly analogous to working memory. Human working memory holds only about seven items on average, yet we now have context windows spanning millions or even tens of millions of tokens. But the issue is we dump everything into it—including irrelevant or incorrect information—a rather brute-force approach.”
- “Storing every token from a live video stream would exhaust a one-million-token context window in just ~20 minutes.”
Flaws in Reasoning
- “I like testing Gemini at chess. Sometimes it recognizes a move is bad but can’t find a better one—and ends up circling back to make that same bad move. A precise reasoning system shouldn’t behave this way.”
- “It can solve IMO gold-medal-level problems, yet commit elementary-school arithmetic errors when the same question is rephrased. Something seems missing in its introspection over its own thought process.”
Agents and Creativity
- “To reach AGI, you need a system that proactively solves problems for you. Agents are that path—and I think we’ve barely begun.”
- “I haven’t yet seen anyone build a top-charting AAA game using vibe coding. Given current effort levels, it should be possible—but hasn’t happened. That suggests something is still missing—either in tools or workflows.”
Distillation and Small Models
- “Our hypothesis is that, six months to a year after a cutting-edge Pro model launches, its capabilities can be distilled into extremely compact models runnable on edge devices. We haven’t yet hit any theoretical ceiling on information density.”
Scientific Discovery and the ‘Einstein Test’
- “I sometimes call it the ‘Einstein Test’: Can you train a system using only knowledge available in 1901, then have it independently derive Einstein’s 1905 breakthroughs—including special relativity? Once achieved, these systems will be close to inventing genuinely novel concepts.”
- “Solving a Millennium Prize Problem would already be extraordinary. Harder still is proposing an entirely new set of Millennium Prize Problems—ones so profound that leading mathematicians deem them equally worthy of a lifetime’s pursuit.”
Advice for Deep-Tech Founders
- “Pursuing hard problems versus easy ones isn’t meaningfully different in difficulty—only in *how* they’re hard. Life is short; invest your energy in things no one else will do if you don’t.”
The Path to AGI
Gary Tan: You’ve thought about AGI longer than almost anyone. Looking at current paradigms, how much of AGI’s final architecture do we already possess—and what’s fundamentally missing?
Demis Hassabis: Large-scale pretraining, RLHF, chain-of-thought reasoning—we’re confident these will be integral parts of AGI’s final architecture. These techniques have already proven themselves repeatedly. I simply can’t imagine discovering two years from now that this entire trajectory was a dead end—it wouldn’t make sense. Yet atop what we already have, there may still be one or two missing pieces: continual learning, long-horizon reasoning, and certain aspects of memory remain unresolved. AGI must solve all of them. Perhaps incremental innovations built on existing tech will suffice—or perhaps one or two critical breakthroughs remain. I doubt more than one or two exist. My personal assessment of whether such critical gaps exist sits at roughly 50/50. So at Google DeepMind, we pursue both paths in parallel.
Gary Tan: I work extensively with Agent systems—and what shocks me most is that the underlying weights remain identical across iterations. Continual learning thus becomes especially intriguing, because right now we’re essentially jury-rigging it—like those “nighttime dreaming cycles” people talk about.
Demis Hassabis: Yes, those dreaming cycles are fascinating. We’ve long considered this problem in the context of integrating episodic memory. My PhD research focused on how the hippocampus elegantly integrates new knowledge into existing frameworks. The brain does this superbly—especially during sleep, particularly REM sleep, replaying salient experiences to extract lessons. Our earliest Atari program, DQN (DeepMind’s 2013 Deep Q-Network—the first deep reinforcement learning system to achieve human-level performance on Atari games), succeeded partly via experience replay—a technique directly inspired by neuroscience, replaying successful trajectories. That was 2013—ancient history in AI terms—but absolutely pivotal at the time.
I agree with you: today we *are* jury-rigging it—dumping everything into the context window. It feels wrong. Even though machines aren’t biological brains—and theoretically can support million- or even ten-million-token context windows with perfect recall—the cost of retrieval remains real. At the moment a concrete decision is needed, identifying truly relevant information isn’t trivial—even if you’ve stored everything. So I believe memory remains a rich frontier for innovation.
Gary Tan: Honestly, a million-token context window is far larger than I’d anticipated—and enables many new applications.
Demis Hassabis: It’s large enough for most intended use cases. But consider: the context window is roughly analogous to working memory. Humans hold only ~seven items in working memory, while we now operate with contexts spanning millions or even tens of millions of tokens. The problem is we cram *everything* into it—including unimportant or erroneous information—a rather crude strategy. And if you’re processing live video streams, naively storing every token, a million-token context lasts only ~20 minutes. But to understand your life circumstances over weeks or months? That falls far short.
Gary Tan: DeepMind has historically invested deeply in reinforcement learning and search. How deeply is that philosophy embedded in building Gemini today? Is reinforcement learning still underappreciated?
Demis Hassabis: It likely *is* underappreciated. Attention to it ebbs and flows. From Day One, DeepMind has built Agent systems. All our work on Atari and AlphaGo fundamentally involved reinforcement learning Agents—systems capable of autonomous goal achievement, decision-making, and planning. Of course, we chose games initially because their complexity was bounded—and then progressively tackled harder games, like AlphaStar after AlphaGo. We essentially worked through the entire spectrum of games.
The next question became: can we generalize these models into world models or language models—not just game-specific ones? That’s precisely what we’ve done over the past few years. Today’s leading models—especially their chain-of-thought reasoning—represent a return to ideas pioneered by AlphaGo. Much of our earlier work remains highly relevant; we’re revisiting old ideas at larger scale and greater generality—including Monte Carlo tree search and other RL methods. AlphaGo and AlphaZero’s core insights are profoundly connected to today’s foundation models—and I believe much of the progress over the next few years will stem from this lineage.
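The Monte Carlo tree search lineage mentioned above centers on a selection rule that balances a learned value estimate against a policy prior. A simplified PUCT-style score, in the style popularized by the AlphaGo/AlphaZero family (the constant and argument names here are illustrative), looks like:

```python
import math

def puct_score(q_value, prior, parent_visits, child_visits, c_puct=1.5):
    """Simplified PUCT-style selection score: exploit the current value
    estimate, and explore in proportion to the policy prior and in
    inverse proportion to how often the child has been visited."""
    exploration = c_puct * prior * math.sqrt(parent_visits) / (1 + child_visits)
    return q_value + exploration
```

During search, the move with the highest score is descended at each node, so rarely visited moves with a strong prior keep getting explored until their value estimate settles.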
Distillation and Small Models
Gary Tan: Right now, greater intelligence demands larger models—but distillation advances mean smaller models can become remarkably fast. Your Flash models are powerful, delivering ~95% of frontier-model capability at one-tenth the cost. Correct?
Demis Hassabis: That’s among our core strengths. You must first build the largest models to attain frontier capabilities. One of our greatest advantages is rapidly distilling and compressing those capabilities into ever-smaller models. Distillation itself was pioneered by us—and we remain world-leading here. We also have strong business incentives: we’re arguably the world’s largest AI application platform, with AI Overviews, AI Mode, and Gemini integrated across *every* Google product—from Maps to YouTube—reaching billions of users across dozens of billion-user products. These demand extreme speed, efficiency, low cost, and minimal latency. That drives us relentlessly to optimize Flash and even smaller Flash-Lite models—and I hope this ultimately benefits users across diverse workloads.
Gary Tan: I’m curious how intelligent small models can ultimately become. Does distillation have limits? Could a 50B- or 400B-parameter model match today’s largest frontier models in intelligence?
Demis Hassabis: I don’t believe we’ve yet hit any fundamental information-theoretic limit—nor does anyone know if such a ceiling exists. Perhaps someday we’ll encounter an information-density ceiling—but for now, our assumption is that within six months to a year of a cutting-edge Pro model’s release, its capabilities can be compressed into models small enough to run on edge devices. You see this already in Gemma models: our Gemma 4 performs exceptionally well at its size class, leveraging extensive distillation and small-model optimization. So I truly see no theoretical limits—and feel we’re far from any.
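Distillation, as discussed here, classically means training a small student model to match a large teacher's softened output distribution. A minimal sketch of that loss, in the style of the classic Hinton et al. formulation (the temperature value is an illustrative assumption; real pipelines mix this with a hard-label loss), is:

```python
import numpy as np

def softmax(logits, T=1.0):
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()                       # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, T=2.0):
    """Cross-entropy of the student against the teacher's temperature-
    softened distribution. Higher T exposes more of the teacher's
    'dark knowledge' about relative probabilities of wrong answers."""
    p_teacher = softmax(teacher_logits, T)
    log_q_student = np.log(softmax(student_logits, T) + 1e-12)
    # Scaling by T^2 keeps gradient magnitudes comparable across temperatures.
    return -T * T * float(np.dot(p_teacher, log_q_student))
```

The loss is minimized when the student reproduces the teacher's distribution exactly, which is why capability transfers so readily into much smaller models.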
Gary Tan: There’s an astonishing phenomenon today: engineers can now accomplish ~500–1,000× more work than six months ago. Some people in this room are doing ~1,000× the work of a Google engineer in the early 2000s. Steve Yegge discussed this.
Demis Hassabis: I find it exhilarating. Small models serve multiple purposes. First, lower cost—and speed brings additional benefits. In coding or similar tasks, faster iteration accelerates progress, especially when collaborating with systems. A rapid system—even if only 90–95% as capable as the frontier—often delivers more net value due to dramatically improved iteration speed.
A second major direction is deploying models on edge devices—not just for efficiency, but for privacy and security. Consider devices handling highly sensitive personal data—or robots. For a home robot, you’d want a locally-run, efficient, yet powerful model—delegating specific tasks to cloud-based large models only when necessary. Audio and video streams processed locally, data retained locally—I envision this as an ideal end state.
Memory and Reasoning
Gary Tan: Returning to context and memory: models are currently stateless. If they gained continual learning, what would developer experience look like—and how would you guide such models?
Demis Hassabis: This is a fascinating question. The lack of continual learning is a key bottleneck preventing Agents from completing full tasks. Current Agents excel at local subtasks—you can chain them together for impressive results—but they struggle to adapt to *your specific environment*. That’s why they’re not yet truly “fire-and-forget”: they need to learn *your* context. Solving this is essential for full general intelligence.
Gary Tan: Where does reasoning stand? Models exhibit strong chain-of-thought reasoning—but still falter on errors even bright undergraduates wouldn’t make. What specifically needs fixing—and what reasoning advances do you anticipate?
Demis Hassabis: There’s vast room for innovation in reasoning paradigms. What we do remains quite crude and brute-force. Many improvements are possible—e.g., monitoring chain-of-thought processes and intervening mid-thought. I often feel our systems—and competitors’—overthink and get stuck in loops.
I enjoy testing Gemini at chess. Interestingly, all leading foundation models perform quite poorly at chess—a revealing domain, since it’s well-understood and lets me quickly assess whether reasoning has gone astray. We observe cases where Gemini considers a move, recognizes it’s bad, fails to find a better alternative—and circles back to play that same bad move. A precise reasoning system shouldn’t behave this way.
This stark disparity persists—but fixing it may require only one or two adjustments. Hence the phenomenon of “jagged intelligence”: solving IMO gold-medal problems on one hand, yet committing elementary arithmetic errors when questions are rephrased. Something appears missing in its introspection over its own thought process.
The Real Capabilities of Agents
Gary Tan: Agents are a big topic. Some call it hype. Personally, I think we’re just beginning. What’s DeepMind’s internal assessment of Agent capabilities—and how does it differ from external hype?
Demis Hassabis: I agree—we’re just beginning. To achieve AGI, you need a system that proactively solves problems for you. This has always been clear to us. Agents represent that path—and I think we’ve barely started. Everyone is experimenting with how to make Agents collaborate effectively. We’ve done extensive personal experimentation—and many here likely have too. How do we embed Agents into workflows—not as nice-to-have enhancements, but as engines performing foundational work? We’re still in the experimental phase. Only in the past two or three months have we begun identifying truly high-value use cases. The technology has finally matured beyond toy demos—delivering real time and efficiency gains.
I often see people launching dozens of Agents for dozens of hours—but I’m uncertain the output justifies that investment.
We haven’t yet seen anyone build a top-charting AAA game using vibe coding. I’ve written some myself—and many here have built promising small demos. I can now prototype a *Theme Park*-style game in half an hour—something that took me six months at age 17. I suspect spending an entire summer could yield something truly extraordinary. Yet craftsmanship, human soul, and taste remain essential—you must infuse these into anything you build. Remarkably, no child has yet shipped a game selling one million copies—yet given current tooling, it *should* be possible. So something’s still missing—perhaps in workflow, perhaps in tools. I expect such achievements within the next 6–12 months.
Gary Tan: How automated will this be? I don’t expect fully autonomous outcomes immediately. A more likely path is people first achieving 1,000× efficiency gains—and then someone using these tools to ship a hit app or game—after which more automation follows.
Demis Hassabis: Yes—that’s exactly what you’ll see first.
Gary Tan: Partly because some *are* doing this—but choose not to publicly disclose how much Agents assist them.
Demis Hassabis: Possibly. But let me address creativity. I often cite AlphaGo’s Move 37 in Game 2. For me, that moment was the catalyst—I launched scientific projects like AlphaFold immediately afterward. We began AlphaFold the day after returning from Seoul—ten years ago. I’m visiting Korea now to celebrate AlphaGo’s tenth anniversary.
But Move 37 alone wasn’t enough. It was cool and useful—but can this system *invent Go itself*? If you give it a high-level description—e.g., “a game whose rules take five minutes to learn but a lifetime to master; aesthetically elegant; playable in an afternoon”—and it returns Go as the answer? Today’s systems cannot do this. Why not?
Gary Tan: Someone in this room might already be able to.
Demis Hassabis: If so, the answer isn’t that the system lacks capability—but that *we’re using it incorrectly*. That may be the right answer. Perhaps today’s systems *already* possess this ability—but require a uniquely gifted creator to drive them, providing the project’s soul while merging seamlessly with the tools—becoming almost one with them. If you immerse yourself in these tools daily and possess deep creativity, you may produce unimaginable results.
Open Source and Multimodal Models
Gary Tan: Let’s shift to open source. Gemma’s recent release enables powerful models to run locally. What’s your view? Will AI become user-controlled—rather than residing primarily in the cloud? Will this change who can build products with these models?
Demis Hassabis: We’re staunch supporters of open source and open science. As with AlphaFold, we released everything freely. Our scientific work continues to appear in top-tier journals. With Gemma, our goal was to create world-leading models at comparable sizes. Gemma downloads have already reached ~40 million—just two and a half weeks after launch.
I also believe Western tech stacks matter in open source. China’s open-source models are excellent—and currently lead the field—but we believe Gemma is highly competitive at its size class.
There’s also a resource constraint: no one has spare compute to train two full-scale frontier models. So our current strategy is: edge models—for Android, glasses, robots, etc.—should be open, because once deployed on devices, they’re inherently exposed anyway. Concentrating our openness strategy on these small, on-device models is a strategically sound decision.
Gary Tan: Before coming onstage, I demoed my AI operating system—interacting with Gemini directly via voice. I was nervous demoing it to you—but it actually worked! Gemini was built multimodally from inception. Having used many models, I can say none matches Gemini’s depth of voice-to-model interaction, tool-calling capability, and contextual understanding.
Demis Hassabis: Correct. One underappreciated advantage of the Gemini series is that it was designed multimodally from day one. This made initial development harder than text-only approaches—but we believed the long-term payoff would be substantial, and it’s already materializing. For instance, on world models, we’ve built Genie (DeepMind’s generative interactive environment model) atop Gemini. Similarly, Gemini Robotics builds on multimodal foundation models—and our multimodal edge becomes a competitive moat. We’re increasingly deploying Gemini across Waymo (Alphabet’s autonomous driving company).
Imagine a digital assistant following you into the real world—on your phone or glasses—understanding your physical surroundings and environment. Our systems excel here. We’ll continue investing heavily—and I believe our leadership in this space is substantial.
Gary Tan: Inference costs are falling rapidly. When inference becomes effectively free, what becomes possible—and will your team’s optimization priorities shift?
Demis Hassabis: I’m not sure inference will ever become truly free—Jevons’ Paradox looms large (where efficiency gains increase total consumption). I suspect everyone will eventually consume all available compute. Imagine millions of Agents collaborating—or a small group thinking along multiple pathways simultaneously before integrating results. We’re exploring all these directions—and they’ll all consume available inference resources.
On energy: if we solve just a few challenges—controlled nuclear fusion, room-temperature superconductivity, optimal batteries—I believe materials science will drive energy costs toward zero. But bottlenecks remain in chip fabrication and other physical processes—at least for decades. So inference quotas will persist—and efficient usage remains essential.
The Next Scientific Breakthrough
Gary Tan: Fortunately, small models are growing smarter. Many founders here work in biotech and biology. AlphaFold 3 now extends beyond proteins to broader classes of biomolecules. How far are we from modeling complete cellular systems—and is this a qualitatively different challenge?
Demis Hassabis: Isomorphic Labs is progressing exceptionally well. AlphaFold addresses only one step in drug discovery—we’re now tackling adjacent biochemical research, designing compounds with desired properties, and major announcements are imminent.
Our ultimate goal is a complete virtual cell: a fully functional cellular simulator you can perturb, whose outputs closely match experimental results and deliver practical utility. You could skip vast search steps—and generate abundant synthetic data to train other models predicting real-cell behavior.
I estimate ~10 years until a full virtual cell. On DeepMind’s science side, we’re starting with the virtual nucleus—since the nucleus is relatively self-contained. The key to such problems is isolating a slice of appropriate complexity: sufficiently self-contained that inputs and outputs can be reasonably approximated—then focusing on that subsystem. The nucleus fits this criterion well.
Another issue is insufficient data. I’ve spoken with top scientists in electron microscopy and other imaging fields. Non-destructive live-cell imaging would be revolutionary—converting it into a vision problem, which we know how to solve. But to my knowledge, no technology yet achieves nanoscale-resolution imaging of *living*, dynamic cells without destroying them. Static images at that resolution already exist—and are incredibly detailed—exciting, but insufficient to transform it directly into a vision problem.
So there are two paths: one hardware- and data-driven; the other building better learnable simulators for these dynamical systems.
Gary Tan: You don’t focus solely on biology. Materials science, drug discovery, climate modeling, mathematics—if forced to rank them, which scientific field will be most transformed in the next five years?
Demis Hassabis: Every field is thrilling—which is why this has always been my deepest passion and the reason I’ve pursued AI for over 30 years. I’ve long believed AI will be science’s ultimate tool—advancing scientific understanding, discovery, medicine, and our comprehension of the universe.
We originally framed our mission in two steps: first, solve intelligence—i.e., build AGI; second, use it to solve everything else. We later refined this phrasing because people asked, “Do you really mean *everything*?” Yes—we do. People are beginning to grasp what that implies. Specifically, I mean tackling what I call “root-node problems”—scientific domains whose breakthrough unlocks entirely new branches of discovery. AlphaFold exemplifies this. Over three million researchers worldwide—nearly every biologist—now use AlphaFold. Pharma executives tell me nearly every future drug will leverage AlphaFold somewhere in its discovery pipeline. We’re proud of this impact—and this is the kind of influence we hope AI delivers. But this is merely the beginning.
I can’t conceive of any scientific or engineering field where AI won’t help. The fields you mentioned are roughly at their “AlphaFold 1 moment”—promising results, but not yet conquering their grand challenges. We’ll have significant progress to report across all these areas—from materials science to mathematics—within the next two years.
Gary Tan: It feels Promethean—granting humanity a wholly new capability.
Demis Hassabis: Exactly. And—as the Prometheus myth warns—we must handle this power with great care: how it’s used, where it’s applied, and the risks of misuse.
Lessons from Success
Gary Tan: Many here are founding AI-for-science companies. What distinguishes startups genuinely advancing the frontier from those merely wrapping foundation models in APIs and branding themselves “AI for Science”?
Demis Hassabis: I imagine myself sitting where you are—in Y Combinator reviewing pitches. One thing you must do is anticipate AI’s trajectory—which is hard in itself. But combining AI’s evolution with another deep-tech domain presents enormous opportunity. Whether materials, medicine, or other profoundly difficult scientific fields—especially those involving the atomic world—there are no shortcuts foreseeable. These fields won’t be disrupted overnight by the next foundation model update. If you seek defensibility, this is where I’d recommend focusing.
I’ve always favored deep tech. Truly enduring, valuable things aren’t easy. I’m drawn to deep tech. In 2010, AI *was* deep tech—investors told me, “We already know this doesn’t work,” and academia dismissed it as a niche 1990s idea that had failed. But if you believe in your idea—why *this time* is different, what unique combination of background you bring—you ideally combine expertise in ML and applications—or can assemble such a founding team. That’s where immense impact and value lie.
Gary Tan: This insight is vital. Once something succeeds, it seems inevitable—but before success, everyone opposes you.
Demis Hassabis: Absolutely—so you must pursue what truly excites you. For me, I’d do AI regardless of circumstance. I decided as a child this was the most impactful pursuit imaginable—and history has borne that out. Though it might not have; perhaps we’re 50 years early. It’s also the most fascinating thing I can imagine. Even if we were still in a garage, with AI unrealized, I’d find a way to continue. Maybe I’d return to academia—but I’d keep going somehow.
Gary Tan: AlphaFold exemplifies pursuing a direction and betting correctly. What makes a scientific field ripe for an AlphaFold-style breakthrough? Are there patterns—e.g., a specific type of objective function?
Demis Hassabis: I should write this down someday. From AlphaGo, AlphaFold, and all Alpha projects, I’ve learned our current tech works best under three conditions. First, the problem has an enormous combinatorial search space—the bigger, the better—exceeding brute-force enumeration or specialized algorithms. Both Go’s move space and protein conformation spaces dwarf the number of atoms in the universe. Second, you can clearly define an objective function—e.g., minimizing protein free energy or winning at Go—enabling gradient ascent. Third, sufficient data exists—or a simulator can generate abundant in-distribution synthetic data.
If these three hold, today’s methods can go remarkably far—to find that proverbial “needle in a haystack.” Drug discovery follows the same logic: a compound exists that treats this disease without side effects—if physics permits it—so the sole challenge is finding it efficiently and feasibly. AlphaFold first proved such systems *can* locate that needle in massive search spaces.
Gary Tan: Let’s zoom out. We’ve discussed humans using these methods to create AlphaFold—but there’s a meta-layer: humans using AI to explore possible hypothesis spaces. How far are we from AI systems performing genuine scientific reasoning—not just pattern matching on data?
Demis Hassabis: I think we’re very close. We’re building such general systems—like our AI co-scientist and AlphaEvolve algorithms—which go further than base Gemini. All leading labs are exploring this.
Yet personally, I haven’t yet seen a *true*, *major* scientific discovery produced by these systems. I believe it’s imminent—and tied to creativity: breaking beyond known boundaries. At that level, it’s no longer pattern matching—because no patterns exist. Nor is it pure extrapolation—it’s analogical reasoning, which current systems either lack or which we haven’t yet harnessed correctly.
In science, I often cite a benchmark: can it propose a genuinely interesting hypothesis—not just verify one? Verification itself can be monumental—e.g., proving the Riemann Hypothesis or solving a Millennium Prize Problem—but perhaps we’re only years away from that.
Harder still is proposing a *new set* of Millennium Prize Problems—so profound that top mathematicians deem them equally worthy of a lifetime’s pursuit. That’s another order of magnitude harder—and we don’t yet know how. But I don’t believe it’s magic. I trust these systems will ultimately achieve it—perhaps needing just one or two more pieces.
A test we can apply is what I call the “Einstein Test”: can you train a system using only knowledge available in 1901—and have it independently derive Einstein’s 1905 breakthroughs, including special relativity and his other papers that year? I think we should seriously run this test repeatedly—and see when it succeeds. Once it does, these systems will be close to inventing genuinely novel concepts.
Founding Advice
Gary Tan: Final question. Many here have deep technical backgrounds—and aspire to build at your scale. You’re among the world’s largest AI research organizations. Coming from the AGI research front lines—what’s one thing you know now that you wish your 25-year-old self had known?
Demis Hassabis: We’ve already touched on part of it. You’ll find pursuing hard problems versus easy ones isn’t meaningfully different in difficulty—only in *how* they’re hard. Different endeavors have different kinds of hardness. But life is short and energy finite—so invest your vitality in things no one else will do if you don’t. Use that as your filter.
Second, cross-domain combinations will become increasingly common over the next few years—and AI will make crossing domains easier.
Finally, this depends on your AGI timeline. Mine is ~2030. If you launch a deep-tech project today, it typically implies a decade-long journey—so you must plan for AGI arriving midway. What does that mean? Not necessarily negative—but you must factor it in. Can your project leverage AGI? How will AGI systems interact with your project?
Returning to AlphaFold and general AI systems: I foresee scenarios where Gemini, Claude, or similar general systems invoke AlphaFold-like specialized tools. I don’t believe we’ll cram everything into one monolithic “brain”—stuffing all protein data into Gemini makes no sense, as Gemini doesn’t need to fold proteins. As discussed earlier regarding information efficiency, that protein data would degrade its language capabilities. Better is having powerful general-purpose tool-using models that can invoke—or even train—specialized tools, while keeping those tools as independent systems.
This perspective deserves deep reflection—it influences what you build today, including factories and financial systems. Take AGI timelines seriously. Envision that world—and build something that remains useful when it arrives.