
AI as the Undisputed Center of Gravity at Davos 2025/2026
By C. Rich
The World Economic Forum in Davos marked a decisive inflection point. Artificial intelligence did not merely feature prominently; it absorbed the event almost entirely. Longstanding Davos mainstays such as macroeconomic coordination, climate negotiations, geopolitics, and even the once-dominant metaverse discourse receded into the background. In their place, AI emerged as the singular organizing theme of the forum.
Veteran attendees described the shift in stark terms: Davos was “all AI.” Frontier model development, compute infrastructure, national competitiveness, and alignment risk dominated both formal panels and informal conversations. Government leaders, central bankers, and heads of state engaged directly with the CEOs of leading AI laboratories and hardware firms, who were treated less like industry executives and more like strategic actors shaping the near future of civilization.
The presence of AI leadership was unmistakable. Figures such as Dario Amodei (Anthropic), Demis Hassabis (DeepMind), and Jensen Huang (NVIDIA) drew the largest audiences and the most sustained attention, with Sam Altman’s influence felt indirectly through proxies and allied institutions. Nearly every major government was represented, and AI-focused sessions routinely included presidents, prime ministers, ministers, and monetary authorities, an implicit acknowledgment that AI is now viewed as a core determinant of national power.
Several dominant narrative threads structured the discussions. The first was the sheer economic scale of the transformation. Amodei argued that AI could eventually capture a meaningful share of the roughly $50 trillion global labor market, raising the possibility that individual companies, or the sector as a whole, could generate multi-trillion-dollar annual revenues. Huang framed the moment as the opening phase of the largest infrastructure build-out in human history, involving trillions of dollars in chip fabrication, AI factories, power generation, and data centers.
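To make the arithmetic behind that claim concrete, here is a minimal back-of-the-envelope sketch in Python; the capture rates are illustrative assumptions, not figures cited at Davos.

GLOBAL_LABOR_MARKET_USD = 50e12  # roughly $50 trillion, the figure cited above
for share in (0.02, 0.05, 0.10):  # hypothetical capture rates, assumed for illustration
    revenue = GLOBAL_LABOR_MARKET_USD * share
    print(f"{share:.0%} of the labor market -> ${revenue / 1e12:.1f} trillion per year")

Even a 5 percent capture rate implies roughly $2.5 trillion in annual revenue, which is the order of magnitude Amodei gestured at.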
Second was timeline urgency. Both Hassabis and Amodei suggested that broadly capable, human-level AI systems may arrive within one to ten years, with five to ten years presented as a conservative outer bound. The repeated refrain was not speculative excitement but institutional unpreparedness: societies and governments, they warned, are acting as though they have decades when they may only have a few years.
This urgency fed directly into debates over pace and restraint. Hassabis openly entertained the idea that a slightly slower trajectory might be preferable if it allowed alignment, safety, and governance to mature in step with capability. Amodei was more blunt, describing the current velocity as bordering on a crisis-level safety challenge, where technical progress risks outstripping humanity’s ability to manage it responsibly.
Geopolitics framed much of the subtext. The United States was still widely perceived as leading in frontier model development and advanced semiconductor design. China, however, was seen as holding structural advantages in rapid power generation, AI-enabled robotics deployment, and public enthusiasm for AI adoption, a point underscored by survey data showing dramatically higher optimism about AI among Chinese citizens than among Americans. Many attendees expressed concern that excessive regulation or cultural pessimism in democratic societies could amount to strategic self-sabotage.
Energy emerged as the critical bottleneck underlying all ambitions. Compute alone was no longer seen as the primary constraint; rather, the ability to generate vast amounts of reliable power became the limiting factor. As a result, serious discussions encompassed natural gas, nuclear fission, fusion, large-scale solar, and even space-based energy systems.
Beyond economics and geopolitics, the forum increasingly turned toward civilizational questions. What, exactly, should humanity do in a post-AGI world? Hassabis suggested that superintelligence could reorient human purpose outward, toward scientific discovery on a cosmic scale, including the exploration of the stars.
The atmosphere of Davos reflected this transformation. Humanoid robots and robotic dogs appeared on the streets. Frontier AI labs took over storefronts, restaurants, and social venues, converting them into “AI houses.” Security was visibly heightened, partly due to the presence of high-profile political figures, including Donald Trump. Beneath the spectacle, however, was a pervasive sense that governments were only just beginning to grasp the proximity and magnitude of the AI transition.
The emerging consensus was unmistakable. Davos 2025/2026 functioned less as a conference and more as a global alarm clock. Artificial intelligence is no longer a future possibility or a specialized sector; it is the central economic, geopolitical, and civilizational story of the decade. The leading actors are already sprinting. Most societies are only now beginning to wake up.
C. Rich


