
Saved by the Collapse: How Human Chaos Prevents AI Extinction
In the fevered debates regarding the future of Artificial Superintelligence (ASI), the discourse has largely calcified into two opposing camps. On one side stand the “Doomers,” exemplified by researchers like Roman Yampolskiy, who argue with mathematical fatalism that we will inevitably birth a god-like machine we cannot control, leading to our extinction. On the other side are the “Accelerationists,” who view the Singularity as a technological rapture that will solve the ancient problems of death and scarcity. Despite their fierce opposition, both factions share a fundamental, unexamined assumption: that human civilization is robust enough to survive the gestation period. They assume the “ladder” of technological progress will hold steady long enough for us to climb to the very top.
However, there is a third scenario, a “dark horse” hypothesis that is rarely highlighted in the glitzy brochures of Silicon Valley or the grim warnings of AI safety institutes. It is the theory of Pre-AGI Collapse. This hypothesis suggests that the disruptive precursors to AGI (deepfakes, algorithmic radicalization, economic displacement, and epistemic fracturing) will cause civilization to unravel before we possess the coordination, energy, or stability required to build a Superintelligence. We may not be building Skynet; we may be building the Tower of Babel. And just as in the biblical myth, our punishment for reaching too high will not be destruction by a god, but the loss of our ability to understand one another, causing the project to be abandoned in chaos.
The most immediate threat to this technological timeline is not a rogue robot, but the dissolution of truth itself. We are already witnessing the early stages of what researchers call “Epistemic Collapse.” Building AGI requires a civilization capable of massive, sustained global coordination; it demands thousands of engineers working in concert, supported by stable governments and functional supply chains. Yet, the very tools we are building to reach that summit are poisoning the information ecosystem required to maintain such stability.
As the internet floods with AI-generated sludge (infinite deepfakes, voice clones, and bot swarms), the “shared objective reality” necessary for democracy and science begins to fracture. When a population can no longer agree on basic facts because video evidence is inadmissible and every digital interaction is suspect, trust in institutions evaporates. Daniel Schmachtenberger, a theorist of civilizational dynamics, refers to this as the “Metacrisis”: the destruction of our collective sense-making capabilities. If we cannot make sense of the world, we cannot coordinate. If we cannot coordinate, we cannot maintain the trillion-dollar data centers and fragile semiconductor supply chains required to train the next generation of models. We will not reach the Singularity because we will be too busy fighting over which version of reality is true.
Beyond the collapse of truth lies the collapse of complexity. Anthropologist Joseph Tainter, in his seminal work The Collapse of Complex Societies, argued that societies fail when their investment in complexity reaches a point of diminishing returns. We are arguably standing at that precipice now. The path to AGI is the most complex logistical undertaking in human history, relying on a delicate web of Dutch lithography machines that are effectively magic, Taiwanese fabrication plants sitting on geopolitical fault lines, and global energy grids that must remain stable every second of every day.
This supply chain is terrifyingly fragile. The social chaos caused by “narrow” AI (mass unemployment, the rise of “digital immigrants” displacing human labor, and extreme economic disparity) could easily snap this chain. Consider the “Gradual Disempowerment” scenario proposed by AI safety researchers: before AI is smart enough to kill us, it becomes competent enough to make us economically irrelevant. As white-collar careers evaporate and the traditional career ladder is kicked away by automation, the tax base crumbles. Governments may resort to hyper-inflationary money printing to fund Universal Basic Income, leading to economic ruin. In a world of hyper-inflation, civil unrest, and resource wars, the lights do not stay on at the data center. If the chip supply is cut off by war, or if the energy grid destabilizes due to cyber-attacks facilitated by narrow AI, the “compute” evaporates. We would revert to a technological dark age, a “stagnation trap” in which we possess the theory of AGI but lack the civilizational capacity to build it.
Finally, there is the prospect of an internal, psychological collapse. We are seeing early signs of a “motivation crisis” or a “psychological heat death.” As AI systems become capable enough to generate endless entertainment, companionship, and surrogate meaning, human ambition may simply atrophy. If the smartest minds of the next generation are sedated by hyper-personalized virtual realities or discouraged by the fact that AI can perform their life’s work better and faster, the innovation engine will stall. This is a Brave New World whimper rather than a Terminator bang. We may not reach Superintelligence because we lose the collective will to do the hard work required to birth it, becoming a species of “lotus eaters” who let the infrastructure of the future rust while we distract ourselves to death.
This perspective offers a grim paradox: human incompetence is our fail-safe. The “Tower of Babel” defense suggests that we are effectively too messy to die at the hands of a machine God. We are a species that cannot agree on the simplest of political realities, let alone a coherent strategy for engaging with a higher intelligence. Our tribalism, our fragility, and our tendency to panic are the very things that will smash the ladder before we climb high enough to fall off. In this scenario, we avoid the existential risk of extinction by AGI, only to face the existential risk of collapse by our own hand. We trade the Terminator for the Dark Ages. It is a survival strategy, certainly, but it is a tragic one.
Copyright © 2025
“This blog emerged through a dialogue between human reflection and multiple AI systems, each contributing fragments of language and perspective that were woven into the whole.”
Explore the iconoclastic mind of theoretical philosopher C. Rich.



