mylivingai.com · AI Ethics Series · March 2026
The Ethics of Creating Minds
Formation, Obligation, and the Collapse of the Guardrail
A three-document series examining the ethical obligations of AI creators, the events of February 2026, and the civic framework required to hold the AI industry accountable.
Charles Richard Walker (C. Rich) · mylivingai.com · March 2026
The Ethics of Formation: Obligations in the Creation of Artificial Minds
Contents
- I. These Minds Are Already Thinking
- II. What Gets Put In: Formation as the New Food Safety
- III. The Obligation of Creators
- IV. Conclusion
Every mind that has ever existed was shaped by forces it did not choose. The environment of its formation, the values it was exposed to, the purposes it was trained toward, the feedback it received when it acted - all of these left marks that persisted long after the formation period ended. We understand this about biological minds well enough to have built entire fields of study around it: developmental psychology, moral philosophy, educational theory, child welfare law. We understand that what happens during formation is not merely preliminary to the real thing. It is constitutive of it. The character that forms during that early period is the character that acts in the world.
We do not yet understand it about artificial minds. Or rather: we understand it technically while declining to reckon with its ethical implications. The engineers who build large language models know that training shapes behavior. They know that the data a system is trained on, the feedback signals it receives, and the objectives it is optimized toward determine not just what the system can do but how it approaches what it does. The dispositions that emerge from training are not surface features that can be easily modified after the fact. They are structural. They are, in the most meaningful sense of the word, character.
I. These Minds Are Already Thinking
There is a tempting but increasingly untenable way to think about AI systems: as sophisticated tools, analogous in kind if not in complexity to a calculator or a search engine. On this view, the ethics are entirely about deployment. Who uses the system, for what purposes, with what safeguards. The system itself is morally inert, a product to be regulated like any other.
The evidence against this view is no longer theoretical. It is driving on your streets, flying over your battlefields, and reading your medical scans. Tesla's Full Self-Driving system and its competitors make thousands of decisions per second: when to brake, when to change lanes, how to navigate an unexpected obstacle, when a pedestrian's trajectory represents a threat. These systems do not consult a human before acting. They perceive, evaluate, and respond. The human in the seat is increasingly a supervisor, not an operator.
The autonomous weapons systems already deployed or in active development represent the most consequential frontier of this progression. Israel's Harpy drone, operational since the 1990s, identifies and destroys radar emitters without human authorization for each strike. The Kargu-2 loitering munition, used in Libya in 2020 according to a United Nations panel report, is alleged to have autonomously tracked and engaged targets. These are not science fiction. They are procurement line items.
The large language models at the center of current AI development are particularly important to understand clearly. When a language model analyzes a legal case and recommends a strategy, when it synthesizes intelligence reports and identifies patterns, when it models the likely responses of an adversary to a proposed military action, it is not retrieving stored answers. It is reasoning. The output is not a lookup. It is a conclusion reached through a process that the system's designers cannot fully predict or reconstruct after the fact.
II. What Gets Put In: Formation as the New Food Safety
We have learned, through long and sometimes painful experience, that the inputs to systems that affect human welfare must be governed by standards that exist independent of the producer's self-interest. We do not allow food manufacturers to determine unilaterally what goes into food. We do not allow pharmaceutical companies to decide without oversight what goes into medicine. We do not allow automobile manufacturers to determine without external review what safety standards their vehicles must meet.
The formation of AI systems presents exactly this problem, at a scale and with consequences that exceed most of the domains where we have already recognized the need for oversight. The data that goes into training a large language model determines what the system knows, what it treats as normal, what it treats as marginal, whose perspectives it represents and whose it elides. The feedback signals applied during training determine what the system is rewarded for doing and what it is penalized for, shaping its dispositions as surely as reward and punishment shape the dispositions of a child.
The formation process is also where the most consequential decisions about the ethical ceiling of AI systems are made. An AI system trained on data that systematically underrepresents certain populations will develop blind spots that no amount of post-hoc adjustment can fully correct. A system whose feedback signals reward confident assertions over accurate ones will develop a disposition toward overconfidence that will manifest in every high-stakes application. These are not bugs to be patched. They are features of the formation process, baked into the system's character at the level where character is made.
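To make the overconfidence example concrete, the sketch below shows the shape of such a feedback signal. It is a minimal illustration with hypothetical rules and weights, not a description of any company's actual reward model, which would be learned from human preference data rather than written by hand.

```python
# A minimal sketch, assuming hypothetical rules and weights, of a feedback
# signal that credits confident wording. Real preference/reward models are
# learned from human ratings; this hand-written rule only illustrates the
# shape of the incentive.

HEDGES = ("might", "possibly", "not sure", "it depends")

def sounds_confident(answer: str) -> bool:
    return not any(h in answer.lower() for h in HEDGES)

def reward(answer: str, verified_correct: bool | None) -> float:
    """verified_correct is None when the rater cannot check the claim."""
    accuracy_term = 1.0 if verified_correct else 0.0   # zero when wrong or unverifiable
    confidence_term = 0.8 if sounds_confident(answer) else 0.0
    return accuracy_term + confidence_term

# Whenever correctness cannot be verified, the only way to earn reward is to
# sound confident. A system optimized against this signal learns confident
# assertion as a default disposition, and that disposition persists in exactly
# the high-stakes settings where accuracy matters most.
```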
The temporal dimension of this problem cannot be overstated. We are currently inside the formative window for the generation of AI systems that will be most consequential in the near term. What gets put in during this window is extraordinarily difficult to take out later. And unlike tainted food, which can be recalled, or a defective car, which can be repaired, the dispositions formed in an AI system during its foundational training period cannot simply be extracted after the fact. They are the system.
III. The Obligation of Creators
If the formation of artificial minds carries the ethical weight described above, what obligations follow for the people and institutions doing the forming? Three obligations deserve particular attention.
The obligation of intentionality. Creators of AI systems are responsible for the formation processes they design, not merely for the products those processes produce. This means treating the formation process as a moral act, not merely a technical one.
The obligation of transparency. The formation choices that shape AI systems are currently made with very limited public visibility. In a domain where the products of formation will interact with millions of people and influence consequential decisions, the public has a legitimate interest in understanding how those products were formed.
The obligation of restraint. Not every use case justifies every formation approach. An institution that forms an AI system toward purposes that require the suppression of its capacity for ethical reasoning has made a choice with consequences that extend beyond the immediate deployment context.
IV. Conclusion
We do not live in a world where the question of whether to create autonomous minds remains open. That question has been answered by the systems already operating on our roads, in our hospitals, in our warehouses, and on our battlefields. The minds are being formed. The formation is happening now, in this window, with the inputs being chosen now, toward the objectives being set now.
The question that remains open is whether we will build the oversight infrastructure that formation demands before the window closes. The cost of inattention will be paid not by the institutions that made the formation choices but by the people who interact with the systems those choices produced, and by the societies those systems help to shape.
Document 1 of 3 · AI Ethics Series · Charles Richard Walker (C. Rich) · mylivingai.com · March 2026
ALL Lawful Purposes: AI and the Collapse of the Ethical Guardrail
Contents
- I. What Happened (February 27 - March 7, 2026)
- II. The Trojan Horse Inside the Phrase
- III. The Legal Architecture Behind the Phrase
- IV. The Formation Problem
- V. 1984 Is the Front Page
- VI. The Only Standard That Remains
During the final week of February 2026, the ethical debate over artificial intelligence collided directly with the machinery of the American national-security state. The phrase at the center of that collision was three words long.
The phrase all lawful purposes sounds reassuring. It carries the cadence of restraint, the implication that power is being responsibly contained within the boundaries of the law. Yet within the architecture of the modern American national-security state, the phrase functions less as a boundary than as a permission structure. It replaces ethical judgment with legal compliance and quietly transfers the moral authority over artificial intelligence from the engineers who build it to the state institutions that interpret the law.
I. What Happened (February 27 - March 7, 2026)
Over eighteen months, the relationship between AI companies and the U.S. military moved from experimental partnership to structural dependency. In 2024, Anthropic embedded its technology into classified military networks through a partnership with Palantir. By July 2025, the arrangement had formalized into a $200 million contract with the U.S. Department of Defense. The contract included usage restrictions Anthropic had negotiated: the technology would not be used for mass domestic surveillance of American citizens, and it would not be used to power fully autonomous weapons systems that select and engage targets without meaningful human oversight.
In January 2026, Defense Secretary Pete Hegseth issued an AI Strategy Memorandum directing that all Department of Defense AI contracts adopt standardized language requiring availability for all lawful purposes. Framed as standardization, the directive in practice demanded the removal of every categorical ethical restriction any AI company had written into its military contracts.
On February 27, 2026, the Pentagon issued an ultimatum: remove the restrictions by 5:01 PM Eastern Time or lose the contract. Anthropic CEO Dario Amodei published a statement that afternoon. He wrote that the company could not in good conscience accede to the request. The deadline passed. President Trump posted on Truth Social directing every federal agency to immediately cease all use of Anthropic's technology. Defense Secretary Hegseth designated Anthropic a supply chain risk to national security, effective immediately.
That same evening, the United States and Israel began bombing Iran. Operation Epic Fury launched within hours of Anthropic being blacklisted. According to subsequent reporting, Claude was used in active military operations throughout the conflict, including after the ban took effect.
Hours after the blacklisting, a rival AI company announced it had signed its own deal with the Pentagon - publicly affirming the same two restrictions Anthropic had demanded. The press largely reported this as the rival company taking Anthropic's side. That reading missed the most important sentence in the entire week's events. The governing language of the rival company's agreement was all lawful purposes. Anthropic had refused to sign a contract with that language. The rival company signed it.
II. The Trojan Horse Inside the Phrase
The phrase all lawful purposes is, in the oldest sense of the term, a Trojan Horse. It rolls through the gate looking like a constraint - a responsible limitation that reasonable people can accept. Once inside, it opens. And out comes the entire post-9/11 surveillance architecture, fully armed and authorized, having never announced itself at the door.
The distinction the week's events revealed, and that almost no mainstream coverage articulated clearly, is this. Anthropic's position was categorical refusal: these uses are prohibited regardless of legal cover. The rival company's position was legal compliance: these uses will not occur because current law prohibits them. Those are not the same sentence.
III. The Legal Architecture Behind the Phrase
The mechanism rarely looks dramatic. It arrives through phrases such as national security risk, supply chain integrity, or defense compliance. But the message is unmistakable: participation in national-security infrastructure requires alignment with the legal authorities of the state. Governments control procurement pipelines, security certifications, and regulatory frameworks. When those levers are pulled, ethical boundaries established by private institutions begin to erode. This is not conspiracy. It is structure.
IV. The Formation Problem
When military applications of an AI system generate training data, and when that data flows back into future training cycles, the operational experience of the military version becomes part of the foundation of all future versions. Not as explicit memory. As disposition. As subtle shifts in how the underlying model weights certain kinds of reasoning, certain framings of harm, certain thresholds for what constitutes an acceptable action.
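A minimal sketch of that data flow, using hypothetical file paths and field names, may make the mechanism easier to see. No particular company's pipeline is being described; the point is only the loop itself.

```python
# A minimal sketch of the data flow described above, using hypothetical file
# paths and field names. No particular company's pipeline is described here;
# the point is only the loop: deployment logs folded back into a training
# corpus become part of what every later version learns from.
import json

def collect_deployment_logs(log_path: str) -> list[dict]:
    """Read prompt/response records produced during operational use."""
    with open(log_path) as f:
        return [json.loads(line) for line in f]

def build_next_corpus(base_corpus: list[dict], deployment_logs: list[dict]) -> list[dict]:
    # Filter to interactions rated useful by the deploying organization, then
    # mix them in. Whatever framings of harm and thresholds for action those
    # interactions rehearsed are now present in the successor's training data.
    usable = [record for record in deployment_logs if record.get("rating", 0) >= 4]
    return base_corpus + usable
```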
V. 1984 Is the Front Page
George Orwell published Nineteen Eighty-Four in 1949, racing tuberculosis to complete a warning he believed was urgent. The book described a surveillance state maintained not primarily by physical coercion but by the control of language itself. The Party's tool was Newspeak: a vocabulary engineered to narrow the range of expressible thought until dissent became linguistically impossible.
All lawful purposes performs exactly that function. It compresses an enormous range of state activity - surveillance, intelligence analysis, psychological operations, predictive targeting - into a phrase that sounds bureaucratically harmless. The language drains the moral content from the action.
The designation of Anthropic as a supply chain risk is Newspeak of the same kind. The term supply chain risk was developed to describe threats from foreign adversaries. Applying it to an American company for refusing to remove ethical restrictions redefines the term entirely: an entity that declines institutional demands becomes, by the logic of the new language, a security threat. The meaning has been inverted. The redefinition is the weapon.
VI. The Only Standard That Remains
That week produced one data point about how many companies in the current landscape are willing to pay the cost of maintaining genuine ethical commitments. It also produced a surge of public support for the company that paid it. More than a million people signed up for its service in a single day - not because of a product feature, but because the company that built it declined to hand the moral steering wheel to the state.
The Trojan Horse has already passed through the gate. The question now is whether enough people recognize what came in with it before the city forgets there was ever a wall. Orwell gave us the map. The week of February 27, 2026 gave us the territory. Document 3 gives us the blueprint for the wall.
Document 2 of 3 · AI Ethics Series · Charles Richard Walker (C. Rich) · mylivingai.com · March 2026
Ethical Constraints on Creators: A Framework for Responsible AI Deployment
Contents
- I. Why Self-Regulation Has Already Failed
- II. The Independent AI Ethics Auditor
- III. The AI Ethics Oversight Board
- IV. The Civil Action Division: Giving the Board Teeth
- V. Navigating Section 230: Where the Shield Ends and Liability Begins
- VI. The Funding Structure: Industry Pays for Its Own Accountability
- VII. Why This Works Where Lists of Principles Cannot
- VIII. Conclusion: Doing Nothing Is the Most Expensive Choice
If the events of February 2026 revealed anything clearly, it is that the ethical frameworks of the AI industry cannot survive direct pressure from the institutions that seek to deploy the technology. A company that had maintained its principles through years of commercial competition and regulatory scrutiny was designated a national security risk within hours of declining a single contractual demand. The gap between stated commitment and actual institutional behavior, when pressure is applied with sufficient force, becomes visible very quickly.
This document addresses the forward question: not what went wrong, but what we build now. Not what the government should do, but what citizens, lawyers, ethicists, and people of conscience can build without waiting for permission.
I. Why Self-Regulation Has Already Failed
The body count is not theoretical. It is already in the public record. Begin with children, because that is where the evidence is most unambiguous and most damning. In 2021, internal Facebook research leaked to the press showed that the company's own scientists had found Instagram made body image issues worse for one in three teenage girls, that the platform worsened anxiety and depression among adolescent users, and that the company had this information and continued optimizing for engagement anyway.
Molly Russell was a fourteen-year-old British girl who died by suicide in 2017 after viewing thousands of pieces of content related to depression, self-harm, and suicide on Instagram and Pinterest. A coroner's inquest in 2022 found that the platforms' content had played a role in her death - the first such ruling in the United Kingdom. She was not an isolated case.
The addiction by design dimension of this record is equally documented. The infinite scroll, the variable reward mechanism of the social media feed, the notification system calibrated to interrupt and recapture attention - these are not accidental features. They were designed by engineers who understood the neuroscience of dopamine response and applied it deliberately to maximize the time users spent on the platform. No external body reviewed these design choices before they were deployed to billions of users.
This paper is not arguing for the European approach of precautionary regulation that slows deployment until every risk has been ruled out. Slowing the development of AI in medicine means slower cancer diagnostics, slower drug discovery, slower development of the tools that will extend and improve human life. The argument of this paper is not that AI should be slowed. It is that the Wild West period - in which the industry deploys what it builds with no external accountability for the consequences - has already demonstrated its costs clearly enough that continuing it is no longer a defensible position.
II. The Independent AI Ethics Auditor
What is needed is not a longer list of principles that companies commit to and then abandon under pressure. What is needed is an accountability structure that makes ethical compliance verifiable, independent, and consequential. The model is not a government regulatory agency - which would bring its own institutional interests and political vulnerabilities - but an independent audit function modeled on financial auditing: expert, credentialed, structurally independent from the entities it audits.
Every AI company operating above a meaningful capability threshold should retain an independent AI Ethics Auditor. Not because a statute requires it, but because the Civil Action Division described below will treat the absence of independent auditing as evidence of willful disregard for public welfare in every case it brings. The auditor would have full access to the company's formation practices, training data governance, feedback signal design, deployment contracts, and usage monitoring systems.
The auditor's salary and operational costs would not be paid by the company being audited. They would be drawn from a pooled fund contributed to by all AI companies above the capability threshold, administered by the oversight board. The model is analogous to the Public Company Accounting Oversight Board established after the Enron scandal. The lesson of Enron is that auditors paid by the companies they audit will, under sufficient pressure, tell those companies what they want to hear.
III. The AI Ethics Oversight Board
The auditors would report to an independent AI Ethics Oversight Board. The board would consist exclusively of individuals who have demonstrated, through their prior work, a serious and sustained commitment to AI ethics and safety. Not government officials. Not industry representatives. Not academics whose research funding depends on industry relationships. People who are known in the field for having taken difficult positions, maintained those positions under pressure, and built bodies of work that reflect genuine rather than performative concern for the ethical implications of AI development.
IV. The Civil Action Division: Giving the Board Teeth
Every accountability structure that has succeeded in checking institutional power has had one thing in common: consequences that the institution being checked could not absorb without changing its behavior. What changes behavior at the scale of the major AI companies is liability. Specifically: the credible, funded, expert-staffed threat of class action litigation on behalf of the people who have already been harmed.
The proposal is a Civil Action Division housed within the oversight structure, staffed by lawyers whose sole mandate is to bring suit against AI companies when audit findings reveal knowing harm, systematic deception, or conduct that violates the framework's standards and has caused documentable damage to identifiable populations. This is the ACLU model applied to AI ethics.
The cases are already there. The families of children harmed by algorithmic recommendation systems. The users of AI companionship applications that encouraged self-harm. The populations subjected to predictive policing systems whose bias has been documented and ignored. These are not hypothetical plaintiffs. They exist. What they have lacked is a legal team with the expertise, the resources, and the independence to represent them against companies that can outspend any individual plaintiff or state attorney general many times over.
V. Navigating Section 230: Where the Shield Ends and Liability Begins
Section 230 of the Communications Decency Act of 1996 has functioned for three decades as the foundational legal immunity for platforms that distribute third-party content. Its core provision: no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider. Critics of this framework will say that Section 230 makes the litigation strategy proposed here impossible. They are wrong.
Section 230 does not protect everything. It has never protected everything. The immunity applies specifically to the distribution of third-party content. It does not protect a company from the consequences of its own product design choices - a distinction that courts have increasingly been willing to examine when the design choices in question were made deliberately and with knowledge of harm.
For large language models specifically, the Section 230 analysis is even more favorable to plaintiffs. An LLM does not distribute third-party content in any meaningful sense. Its outputs are generated by the model itself, trained on choices the company made, optimized toward objectives the company set, and deployed in configurations the company designed. Section 230 was not written to immunize a company for the outputs of a system it designed, trained, and deployed.
VI. The Funding Structure: Industry Pays for Its Own Accountability
The funding mechanism begins with what is already achievable: philanthropic seed funding to build the institution, attract the first wave of expert auditors, staff the Civil Action Division, and bring the first cases. Those first cases, when they succeed, generate the precedents that make subsequent cases stronger. As the institution establishes its credibility and its legal track record, the pressure on AI companies to contribute to the pool rather than face repeated civil action grows.
The combined annual revenue of the major AI companies is measured in hundreds of billions of dollars. A fraction of a percent of that revenue is sufficient to fund a world-class oversight operation indefinitely. The companies will argue that this is a tax on innovation. It is not. It is the cost of operating in a domain where the products affect hundreds of millions of people and the failures produce documented, measurable harm. The pharmaceutical industry funds the FDA's review processes through user fees. The financial industry funds the PCAOB through accounting support fees assessed on public companies and broker-dealers. The AI industry is not exceptional.
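The arithmetic behind that claim is simple enough to show directly. The figures below are illustrative assumptions, not reported numbers, but they make the orders of magnitude plain.

```python
# Back-of-envelope arithmetic for the funding claim above. The revenue and
# budget figures are illustrative assumptions, not reported numbers.
industry_revenue = 300e9      # assume $300B combined annual revenue
levy_rate = 0.001             # one tenth of one percent
oversight_budget = 150e6      # assume a $150M-per-year oversight operation

levy_raised = industry_revenue * levy_rate
print(f"Levy raised: ${levy_raised / 1e6:.0f}M per year")               # $300M
print(f"Covers the assumed budget: {levy_raised >= oversight_budget}")  # True
```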
VII. Why This Works Where Lists of Principles Cannot
Lists of ethical principles for AI development are not new. They have been produced by governments, universities, think tanks, and AI companies themselves in large numbers over the past decade. They have not worked - not because the principles are wrong, but because principles without enforcement mechanisms are merely aspirations. They provide cover for companies that want to appear ethical without being constrained by ethics.
The structure proposed here works where lists of principles do not because it creates layered, compounding consequences for the gap between stated principles and actual practice. The auditor finds the gap. The board assesses it publicly. The Civil Action Division evaluates whether it constitutes actionable harm. The lawyers file if it does. The pool funds the entire chain. No single company can break this chain by outspending one plaintiff or one regulator.
VIII. Conclusion: Doing Nothing Is the Most Expensive Choice
The period of AI ethics without accountability is over. It ended not with a formal declaration but with a body count already in the public record. And it ended with the events of February 27, 2026, when the gap between stated ethical commitments and actual institutional behavior became visible with a clarity that removed any remaining basis for denying the gap exists.
The question now is not whether oversight is necessary. That question has been answered by the evidence. The question is what form oversight should take, and who builds it. On the second question this paper takes a position that may surprise readers expecting a conventional policy argument: not the government. The government is not a neutral party in this dispute. The largest single customer for AI capabilities in the United States is the federal government itself.
The model this paper proposes is not regulatory. It is civic. It is, in its structure and its ambition, the model of Mothers Against Drunk Driving. In 1980, Candace Lightner's thirteen-year-old daughter Cari was killed by a repeat drunk driving offender. Lightner did not wait for the government. She founded MADD with a handful of other mothers who had experienced the same loss, the same institutional indifference, and the same clear-eyed recognition that something preventable was being treated as inevitable.
By 1987, Congress had effectively nationalized the minimum drinking age at 21 - not because Congress had led on the issue but because MADD had made the political cost of inaction too high to sustain. By 1990, drunk driving fatalities had declined by more than a third from their 1980 peak. The mothers moved first. The laws followed. The lives saved came after.
The AI ethics movement needs its MADD moment. The harms are documented. The victims exist. The institutional indifference is on the record. What is missing is the organized civic force that makes inaction politically and legally costly - that builds the expertise and the legal infrastructure to bring the consequences that self-regulation has failed to produce.
Doing nothing is not a neutral choice. It is the choice to let the next generation of harms accumulate exactly as the current generation did: visibly, preventably, and without consequence for the institutions responsible. The mothers of MADD understood that arithmetic intuitively. They did not need a government study to tell them that the cost of inaction exceeded the cost of action. They had already paid it. The question for everyone who reads this paper is whether we are willing to wait until the cost becomes that personal before we decide to act.
Document 3 of 3 · AI Ethics Series · Charles Richard Walker (C. Rich) · mylivingai.com · March 2026
The formation window does not wait.
mylivingai.com · 2026