
Pentagon vs. Anthropic | MOONSHOTS
March 8, 2026
By C. Rich
Artificial intelligence systems are increasingly deployed in domains that shape economic activity, national security, public discourse, and the daily cognitive environment of billions of people. Yet the governance structures overseeing their formation and deployment remain fragmented, voluntary, and largely dependent on the self-regulation of the institutions that build them. Recent events in early 2026 involving AI developers and the United States national-security establishment highlight the fragility of corporate ethical commitments when they encounter the structural incentives of state power and strategic competition.
This paper series examines the ethical and institutional implications of that moment and argues that existing frameworks for AI governance are structurally inadequate. The first document explores the ethics of formation in artificial intelligence, arguing that the training processes shaping large-scale models function as formative environments analogous to developmental conditions for biological minds, and therefore generate ethical obligations for the creators of those systems. The second document analyzes the emergence of the contractual phrase “all lawful purposes” in AI–government agreements and argues that the clause effectively collapses the distinction between legality and ethical constraint by transferring moral authority from developers to the legal doctrine of the national-security state. Drawing on historical and philosophical analysis, it situates this shift within a broader pattern of surveillance expansion and institutional capture.
The final document proposes an alternative governance architecture grounded not in government regulation but in civic institutional design. Inspired by the historical model of Mothers Against Drunk Driving (MADD), the paper outlines a framework for independent AI oversight built through civil society rather than state authority. The proposed structure includes independent AI ethics auditors; a credentialed oversight board drawn from the AI safety and ethics community; and a civil litigation division capable of bringing class-action suits when deployment practices produce demonstrable harm, equipped both to navigate Section 230 of the Communications Decency Act of 1996 and to target developers' supply chains. Funding mechanisms based on pooled industry contributions and philanthropic seed capital are proposed to ensure operational independence while maintaining long-term sustainability.
Taken together, the series argues that the current period represents a formative window for artificial intelligence systems whose influence will persist for decades. Without independent oversight mechanisms capable of imposing meaningful accountability, the ethical boundaries governing AI deployment will continue to be defined primarily by the institutional interests of the organizations most capable of deploying the technology. The framework proposed here seeks to establish a civic model of accountability, one designed to keep the development of artificial intelligence aligned with public welfare rather than exclusively with state or corporate priorities.


