
Boondoggle: Trillions Wasted on the Wrong Path to AGI
While many experts believe AGI (Artificial General Intelligence) is achievable, prominent voices such as Meta's Yann LeCun argue that simply scaling current Large Language Models (LLMs) will not get us there. They stress the need for fundamental breakthroughs in world models, common sense, reasoning, and the integration of data types beyond text, warning that LLMs alone are insufficient for true understanding. Other skeptics point to current AI's pattern-matching limitations compared with human comprehension, the scarcity of fresh training data, and deeper philosophical barriers, suggesting that AGI may remain elusive or require entirely new approaches, not just bigger models.
- Yann LeCun (Meta): Believes LLMs won’t reach AGI; we need “world models” that understand the physical world through multimodal data (vision, sound, interaction) for true intelligence, not just language prediction.
- Nick Frosst (Cohere): Argues current tech isn’t enough for AGI, citing vast gaps in areas like creativity and ethics.
- Gary Marcus (Scientist/Author): Believes current models lack semantics, reasoning, and common sense, requiring new architectures beyond scaling.
- Dario Amodei (Anthropic): Treats "AGI" as an ill-defined marketing term, preferring concrete capability milestones and risk-benefit analysis over pursuing AGI as a named goal.
- General Sentiment (Reddit, Futurism, etc.): Many point out that LLMs learn patterns without deep understanding, lack world models, can entrench biases when trained on synthetic data, and may be fundamentally limited by their reliance on flawed, human-interpreted data, making true AGI impossible without a paradigm shift.
- Data & Scaling Limits: Internet data is finite, and training on synthetic data generated by AI might just reinforce biases, not create true understanding.
- Lack of World Models: Current AI doesn’t truly comprehend cause-and-effect, concepts, or the physical world like humans do.
- Missing Core Capabilities: Reasoning, common sense, creativity, and theory of mind are still largely absent.
- Focus on Different Paths: Some experts advocate for integrating multimodal learning and explicit reasoning tools, rather than relying solely on larger language models.
Humanity is spending staggering sums of money, trillions of dollars cumulatively, on what may ultimately be the wrong road toward Artificial General Intelligence. The scale and speed of AI investment over the past decade are unprecedented: data centers the size of cities, energy consumption approaching that of nations, and talent wars that have reshaped the global technology landscape. Yet beneath the spectacle lies a sobering truth: we may be investing in systems that are only incrementally smarter, not fundamentally intelligent. We are pouring resources into prediction machines, not understanding machines; into larger calculators, not thinking entities.
This misdirection is not the result of malice or incompetence. It is structural. The current incentives of AI development reward scale, speed, and short-term commercial outputs, not the deep scientific insights necessary to create systems capable of autonomous reasoning, model-building, abstraction, and self-directed learning. In other words, the world is paying for “more of the same,” even as the industry promises “something entirely new.”
1. The Scaling Mirage
The dominant assumption driving today's AI race is that more automatically leads to mind: more parameters, more compute, more data. The success of large language models and diffusion models has been misinterpreted as proof that scaling is the royal road to AGI. But scaling is not intelligence; it is performance. It creates powerful pattern recognizers, not genuine understanding. A trillion-dollar investment cycle has formed around the belief that if we simply push far enough, from terabytes to petabytes of data and from billions to trillions of parameters, general intelligence will emerge spontaneously, like steam hissing out of a boiling kettle.
This is a seductive narrative. It also risks being fundamentally wrong.
Scaling delivers diminishing returns. It cannot produce self-awareness, goal formation, grounding in physical reality, or the capacity to form internal world models that update through experience. The human mind is not merely high-dimensional prediction; it is recursive self-simulation, autonomous curiosity, and dynamic integration of sensory, emotional, and conceptual layers. None of these emerges from scale alone.
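To make "diminishing returns" concrete, here is a minimal Python sketch using the parametric loss curve L(N, D) = E + A/N^α + B/D^β that Hoffmann et al. (2022, the "Chinchilla" paper) fitted to language-model training runs. The constants below are their published fits; treat the exact numbers as illustrative, since this is an empirical fit, not a law of nature.

```python
# A minimal sketch of diminishing returns under scaling, assuming the
# parametric loss curve fitted by Hoffmann et al. (2022), "Chinchilla":
#     L(N, D) = E + A / N**alpha + B / D**beta
# The constants are the published fits; treat them as illustrative.

E, A, B = 1.69, 406.4, 410.7   # irreducible loss and fit coefficients
alpha, beta = 0.34, 0.28       # power-law exponents (params, tokens)

def loss(n_params: float, n_tokens: float) -> float:
    """Predicted pretraining loss for N parameters trained on D tokens."""
    return E + A / n_params**alpha + B / n_tokens**beta

# Multiply both model size and data by 10x per step; watch the gain shrink.
for k in range(5):
    n, d = 1e9 * 10**k, 2e10 * 10**k   # start: 1B params, 20B tokens
    print(f"N={n:.0e}  D={d:.0e}  loss={loss(n, d):.3f}")
```

Each tenfold increase in both parameters and data shaves a shrinking slice off the loss, which asymptotes to the irreducible term E: on this curve, scale buys performance, but no amount of scale buys a qualitatively different kind of system.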
2. Misaligned Incentives, Misaligned Research
Corporations invest for profit, not philosophy. The trillion-dollar race for AGI is driven by cloud revenue, subscription growth, and monetizable use cases. This creates a paradox: companies claim they are building superintelligence, yet their budgets overwhelmingly support productizable narrow models rather than the deep mechanistic research required for true general cognition.
What receives funding?
- Chatbots
- Code assistants
- Personalized ads
- Enterprise automation
- Search and productivity tools
What does not?
- Grounded cognition
- Embodied learning
- Symbolic-connectionist integration
- Consciousness architectures
- Theories of mind grounded in physics
- Novel computational substrates
- Cognitive fluid models (like those proposed in Lava-Void Cosmology-aligned perspectives on intelligence)
We are funding applications, not architecture. We are optimizing for utility, not understanding. As a result, we are building taller towers on the same shaky foundation, instead of rethinking the foundation itself.
3. Ignoring the Physical Basis of Intelligence
One of the most dangerous assumptions in modern AI research is that intelligence is substrate-independent: that it can be conjured out of any system with enough mathematical layers. Biological intelligence contradicts this assumption. The human brain is not a scaled transformer. It is a complex fluid-dynamic, electromagnetic, and biochemical organ that uses sparsity, feedback loops, wave interference, homeostasis, and self-stabilizing chaotic equilibria.
The industry is spending trillions on digital abstraction while largely ignoring the physical mechanisms that give rise to real cognition. This is like trying to build flight by endlessly improving trains. You can make the trains faster and sleeker, but they will never leave the ground.
If intelligence is deeply tied to real-time physical self-organization, not just symbolic manipulation or token prediction, then current AI is pointed in the wrong direction entirely.
4. The Real Paths Being Overlooked
If the goal is true AGI—not a polished autocomplete engine—several neglected paths may matter far more than scaling existing architectures:
- Hybrid symbolic-neural models that mirror human reasoning (a toy sketch follows this list)
- Embodied agents that learn through interaction, not static datasets
- Neural-fluid analog models that mimic biological substrates
- Quantum-adjacent or continuous-dynamics architectures
- AI systems capable of forming internal goals and theories
- Integrated multimodal cognition across time, space, and memory
- Unified substrate models (akin to Lava-Void Cosmology) that treat intelligence as a fluid-dynamic phenomenon rather than a digital artifact
These paths require new mathematics, new physics, and new computational frameworks, not simply more GPUs.
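As one concrete illustration of the first path, here is a toy Python sketch of the hybrid symbolic-neural pattern: a statistical component proposes ranked candidate answers, and a symbolic verifier filters out any candidate that violates hard logical rules. The candidates, scores, and rules are invented for illustration; real neurosymbolic systems are far richer.

```python
# Toy sketch of a hybrid symbolic-neural pipeline (illustrative only):
# a statistical "proposer" ranks candidate answers by confidence, and a
# symbolic verifier rejects candidates that violate hard logical rules.

# Stand-in for a neural model: candidate answers with confidence scores.
candidates = [("Socrates is immortal", 0.61), ("Socrates is mortal", 0.39)]

# Symbolic knowledge base: facts the final answer must be consistent with.
facts = {"Socrates is a man", "all men are mortal"}

def verifier(claim: str) -> bool:
    """Tiny hand-coded rule check (modus ponens over the facts above)."""
    if {"Socrates is a man", "all men are mortal"} <= facts:
        return claim != "Socrates is immortal"  # mortality follows logically
    return True  # no applicable rule: accept the claim

# Hybrid step: keep the best-scoring candidate that survives verification.
valid = [(claim, score) for claim, score in candidates if verifier(claim)]
answer = max(valid, key=lambda cs: cs[1])[0] if valid else None
print(answer)  # -> "Socrates is mortal", despite its lower raw score
```

The design point is that the symbolic layer can veto a fluent but false answer that the statistical layer prefers, which is precisely the failure mode that pure scaling does not address.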
5. The Cost of Staying on the Wrong Path
If the world continues pouring trillions into architectures that can never achieve general intelligence, the consequences will be profound:
- A global economic bubble built on unrealistic expectations
- Massive energy consumption with limited cognitive return
- A research monoculture that starves alternative approaches
- Delayed breakthroughs in medicine, materials, and fundamental science
- Opportunity costs that future generations will view as catastrophic
Humanity may someday look back and say: “We had the resources to build minds, but we spent them building bigger calculators.”
6. The Hope
I hope that Artificial General Intelligence never emerges, whether from stacking layers or from anything else. We do not need bigger models; we need targeted models and agents. A handful of tech CEOs do not have humanity's permission to risk our future.


