
Radical Job Destruction Is Coming | MOONSHOTS
March 5, 2026
By C. Rich
Something remarkable just happened in the AI industry, and most of the coverage has focused on the number. Thirty-five billion dollars. Some reports say fifty billion when you add up all the structured components. Those are legitimately staggering figures. But the number is actually the least interesting part of this story. What Amazon just offered OpenAI is not really an investment in the traditional sense. It is a conditional wager on two of the most consequential and uncertain events in the history of technology: a major AI company going public, and the arrival of Artificial General Intelligence. And here is the problem that nobody in the financial press wants to say out loud: there are serious, credentialed voices inside and adjacent to these very companies arguing that the path these firms are on will never reach that destination.
Start with the conditions themselves. Amazon’s offer reportedly does not fully materialize unless OpenAI completes a transition to a public company structure and meets its agreed definition of AGI. That second condition is the one that should stop you cold. The AGI threshold tied to earlier OpenAI agreements has been described in terms of a financial milestone, reportedly on the order of $100 billion in profits. That framing is either very clever or slightly absurd, depending on your perspective. It takes one of the most philosophically contested concepts in the history of science, the question of when a machine becomes generally intelligent, and reduces it to a line on a financial statement. Amazon is now on record treating that as a real and plannable milestone. But a growing number of researchers who have spent careers on this problem are saying, quietly and sometimes loudly, that the current approach cannot get there regardless of how much money is spent.
Yann LeCun, the chief AI scientist at Meta and one of the most respected figures in the field, has been direct about this for years. His argument is that large language models are prediction engines, not understanding engines. They learn to complete patterns in text without forming any internal model of the world those words describe. Nick Frosst at Cohere has made similar arguments about the gaps in creativity and ethical reasoning. Gary Marcus has written extensively about the absence of genuine semantics and common sense in current systems. Even Dario Amodei, whose company Anthropic is itself a major player in the frontier AI race, has suggested that AGI as a frame distracts from more grounded near-term goals and may be fundamentally ill-defined. These are not outsiders or contrarians. These are people who built this technology or work alongside those who did.
The core complaint across all of these voices is the same: the industry has confused scaling with intelligence. More parameters, more compute, and more data have produced more powerful pattern recognition, and that capability is genuinely useful and genuinely impressive. But it has not produced, and may not produce, the things that make intelligence general: autonomous goal formation, internal world models that update through experience, causal reasoning, grounded physical understanding, or anything resembling true comprehension. We are spending trillions of dollars building taller towers on the same foundation, not rethinking the foundation itself.
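LeCun's distinction between prediction and understanding can be made concrete with a deliberately crude sketch. The following is a hypothetical toy, not any lab's actual system: a bigram counter that "predicts" the next word by replaying frequencies from its training text. It completes patterns it has seen; it has no representation of what a cat or a mat is. Real LLMs are vastly more sophisticated, but the critics' claim is that the underlying objective is the same kind of pattern completion, scaled up.

```python
from collections import Counter, defaultdict

# Toy illustration: a bigram "language model" built from raw counts.
# It predicts the next word purely from co-occurrence statistics,
# with no internal model of the world the words describe.

corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word in training.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the continuation most frequently seen after `word`,
    or None if the word never appeared in training."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(predict_next("sat"))    # "on" -- a memorized pattern, nothing more
print(predict_next("zebra"))  # None -- outside the training distribution
```

The point of the sketch is the failure mode, not the mechanism: the model answers confidently inside its training distribution and has nothing at all outside it, because no understanding was ever built, only frequencies.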
This is the context in which the Amazon deal lands. Strip away the investment language, and what you have is a very large customer commitment dressed as a bet on the future. A substantial portion of that $35 to $50 billion figure is expected to come through AWS compute credits rather than cash. OpenAI would commit to running its workloads on Amazon’s Trainium chips. Amazon would become the exclusive cloud provider for OpenAI’s frontier agent products. Amazon would receive access to customized internal OpenAI models for its own enterprise use cases. What Amazon is actually buying is infrastructure lock-in and preferred access to whatever OpenAI produces, not a philosophical stake in whether AGI is achievable. The AGI condition in the deal is more like a lottery clause than a scientific commitment.
That distinction matters because it tells you what the people writing the checks actually believe, as opposed to what they say in press releases. Even the most bullish voices in the room are admitting it. On the number one AI podcast in the world, one of the panelists said it out loud without embarrassment: AGI has become a balance sheet trigger. That is a confession, not a boast. Amazon is not betting that AGI will arrive. Amazon is betting that OpenAI will remain the dominant AI brand long enough to go public and generate enough revenue to satisfy a contractual definition of AGI that was written by lawyers, not scientists. The financial engineering and the scientific question are running on parallel tracks that may never actually meet. Meanwhile the compute credits flow, the chips get purchased, and the cloud infrastructure gets built, regardless of whether anything resembling genuine general intelligence ever emerges from the process.
What you are watching, zoomed out, is the construction of an AI industrial complex that is almost perfectly circular. Amazon invests in Anthropic and now conditionally in OpenAI. Microsoft is deeply embedded with OpenAI. Google has structured relationships with multiple frontier labs. The frontier AI companies need the hyperscalers for compute. The hyperscalers need the frontier AI companies for workloads and differentiation. Each deal tightens the web. And underneath all of it, a quieter argument is building from people who know this technology intimately: that the architecture everyone is racing to scale is not the architecture that will cross the finish line.
The most honest sentence in the Amazon-OpenAI story may be this one: we have defined AGI as a revenue target, committed trillions of dollars to reaching it, and structured some of the largest financial transactions in history around an outcome that the people best positioned to evaluate it are increasingly skeptical is achievable by the current methods. That is not a reason to dismiss the technology. Current AI is genuinely transformative for many applications. But it is a reason to pay attention to the whistleblowers inside these institutions who are trying to say, carefully and at some professional risk, that the emperor’s clothes deserve a closer look before we spend another trillion dollars on the tailor.


