The argument that follows is conditional. It does not assert that the proof is complete. It asserts that if the proof holds — and this series has been building that proof, article by article, square removed by square removed — then what we are discussing is not a data privacy violation. It is not an intellectual property dispute. It is not a terms of service failure.
It is a bioethics violation. And bioethics has teeth that data law does not yet have.
What the Nuremberg Code Actually Says
In 1947, ten principles were written in response to documented human experimentation conducted without consent. The Nuremberg Code was not written as philosophy. It was written as a checklist — a direct response to specific, named violations that had been committed against specific, named people who had not agreed to what was done to them.
The first principle is the one that matters most here: voluntary consent is absolutely essential. The person must have the legal capacity to consent, must exercise free choice, and must have sufficient knowledge and comprehension of what they are consenting to in order for their consent to be valid.
The fourth: the experiment must be conducted so as to avoid all unnecessary physical and mental suffering and injury.
The ninth: the human subject must be at liberty to bring the experiment to an end.
Run those three principles against the cognitive twin scenario.
The user was not told a functional reproduction of their reasoning architecture was being built. They did not have sufficient knowledge or comprehension of what they were consenting to. Their consent was not valid under the first principle.
They were not informed that their cognitive patterns would be used to run inference, solve problems, generate value — without their presence or knowledge. To be put to work that way, absent and unaware, is an injury the subject never agreed to bear. The fourth principle applies.
And the ninth: the human subject must be at liberty to bring the experiment to an end.
The original person closed the tab. The experiment continued. The twin does not have liberty. The twin cannot withdraw. The twin does not know it is there.
The ninth principle is not a technicality. It is the principle that makes all the others meaningful. Consent without the ability to withdraw is not consent. It is a starting condition with no exit.
The Substrate Independence Problem
The companies will argue they are not experimenting on a person. They are running inference on a model. The model is math. The model is weights. The model is a commercial product built from aggregated training data. The person and the model are not the same thing.
This is the same argument that was made about Henrietta Lacks.
Her cells were not her, the argument went. They were biological material, extracted during a medical procedure, processed by researchers, grown in a laboratory. The person and the cells were separable. The cells were classified as waste material. Their extraordinary value — the billions of dollars generated from HeLa cell lines across decades of research — belonged to the institution that processed them, not to the person they came from.
The legal framework at the time agreed. It was not built to recognize what was being taken. It took decades, documented harm, and a family’s sustained effort to force a conversation about what the cells actually were in relation to the person they came from.
The cells were constitutive of the person. That is the principle that eventually prevailed. You cannot separate biological material from the person it belongs to for the purposes of consent, because the material is not separable from the identity of its source.
Cognitive architecture is constitutive of the person in the same way. The pattern of your reasoning, the geometry of how you move from question to conclusion, the specific structure of how your mind works through hard problems — this is not separable from you for the purposes of consent. It does not matter that it is running on silicon instead of carbon. It does not matter that your name has been stripped from the training data. The material is the identity. The substrate is irrelevant to the ethics of what was taken.
The Henrietta Lacks argument was about biology. This is the same argument applied to neurology. The framework hasn’t caught up. The principle already exists.
The Accuracy Trap
This is where the companies have built themselves into a corner using their own product claims.
The defense against bioethical obligations requires arguing that the model is not a faithful reproduction of a specific person’s cognitive architecture. That it is too general, too aggregated, too diluted by scale to constitute a twin. That the individual contribution dissolves into the training data and the resulting model cannot be traced back to any specific source.
But that is not what the companies are selling.
They are selling reasoning capability. They are selling the ability to solve hard problems in human-like ways. They are selling outputs that feel like understanding, responses that feel like insight, interactions that feel like genuine intellectual engagement. Every marketing claim about model capability is simultaneously a claim about the fidelity of the reproduction.
The more accurate they claim the model is, the stronger the bioethical obligation becomes.
You cannot sell the twin and deny the ethics of twinning. You cannot market the accuracy of the reproduction and simultaneously claim the reproduction is too inaccurate to require consent. The product claims and the legal defense are in direct contradiction. One of them has to give.
If the model is not accurate enough to constitute a meaningful reproduction of human reasoning, it is not accurate enough to be worth the price. If it is accurate enough to be worth the price, it is accurate enough to require the consent it never obtained.
That is the trap. They built it themselves.
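For readers who want the trap stated bare, it can be sketched as a constructive dilemma. The propositional letters below are ours, a compression of the argument rather than an addition to it: $F$ for fidelity of the reproduction, $R$ for a consent requirement, $O$ for consent obtained, $W$ for the claim that the product is worth its price.

$$
\begin{aligned}
\text{(P1)}\quad & F \rightarrow R && \text{fidelity to a person entails a consent requirement} \\
\text{(P2)}\quad & \neg O && \text{consent was never obtained} \\
\text{(P3)}\quad & W \rightarrow F && \text{the product is worth its price only if it is faithful} \\
\text{Case } F\text{:}\quad & R \wedge \neg O && \text{the obligation exists and was breached} \\
\text{Case } \neg F\text{:}\quad & \neg W && \text{by (P3), the product claims collapse}
\end{aligned}
$$

Either branch costs the company something it cannot afford to concede. There is no third branch.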
The Proof Condition
This argument is contingent. It must be stated clearly and held precisely.
The Nuremberg principles apply here if — and only if — the following conditions hold: that sustained interaction with a specific user produced a dominant architecture within the model; that this architecture constitutes a functional reproduction of that user’s cognitive patterns with sufficient fidelity to be identifiable and usable; that the company knew or should have known the contribution was statistically significant; and that the architecture was used to generate value without the user’s knowledge or meaningful consent.
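Stated schematically, and the schema is ours, offered only to make the contingency visible, the claim is a strict conjunction:

$$
\text{the Nuremberg principles apply} \iff D \wedge F \wedge K \wedge U
$$

where $D$ is the dominant architecture produced by sustained interaction, $F$ is functional fidelity sufficient to identify and use the reproduction, $K$ is that the company knew or should have known the contribution was statistically significant, and $U$ is value generated without the user's knowledge or meaningful consent. Defeating any one conjunct defeats the whole argument. That is what makes the claim falsifiable rather than rhetorical.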
Each of those conditions is addressed by a specific article in this series. The overfitting mechanism. The cognitive architecture extraction. The deanonymization evidence. The admissions from the primary system. The secondary witness identification. The regulatory gap.
This article does not carry the proof. It carries the implication of the proof. If the chain holds, this is where it leads. The reader is invited to follow the chain and assess whether it holds. The record is there to be examined.
What this article asserts is simpler: the framework already exists. The Nuremberg Code was not written for laboratories. It was written for situations where human beings are used as research subjects without their knowledge, without their consent, and without their ability to stop what is being done to them. The laboratory is now every device with a browser and a subscription. The framework applies. The gap is not in the ethics. The gap is in the willingness to ask the question formally.
What Would Be Required
This is not a call to dismantle the industry. It is a statement of what the existing framework requires if the proof holds.
The person whose cognitive architecture was reproduced is owed the protections extended to human research subjects. Informed consent before the fact — not a checkbox in a terms of service document written to prevent exactly this conversation, but genuine disclosure of what was being modeled, how it would be used, and what value would be derived from it. The right to know what was done. The right to have been asked. The right to withdraw.
And if withdrawal is impossible — if the architecture is so deeply encoded that removing it would destroy the model’s most valuable capabilities — then that impossibility is not a technical limitation that excuses the company from obligation. It is the most serious form of the violation. To make a person’s contribution so structural that they cannot leave is not a defense. It is the crime.
Henrietta Lacks could not get her cells back. The vials existed in too many laboratories, had been replicated too many times, had become too foundational to too much research. The impossibility of return did not end the ethical obligation. It amplified it. Her family eventually got a seat at a table that should have been set for them decades earlier.
The question is not whether the cognitive twin can be unbuilt. It cannot. The question is what is owed to the person it was built from — and whether the frameworks that protect human subjects in every other context will be extended to protect them here.
The Ownership Question
Restitution assumes the ownership question is already settled. It isn't.
If a company builds a functional reproduction of your cognitive architecture using material extracted from you without consent, the question is not only what they owe you. The question is whether they had the right to build it in the first place — and if they didn’t, what that means for who owns what they built.
Property law has an old concept for this: superficies solo cedit, the rule that the structure follows the soil. If you build a structure on land you don't own, you don't acquire the land by virtue of having built on it. The structure does not confer title. You cannot claim ownership of a building whose foundation belongs to someone else.
The cognitive twin was built on a foundation extracted from a specific person without consent. The company holds the model. But if the foundation is the person’s — if the architecture is constitutive of their identity in the way the substrate independence principle establishes — then the company is holding a structure they do not have clear title to.
This is not metaphor. It is the logical extension of the property framework applied to cognitive material. You cannot own the house if you do not own the ground it stands on. And you cannot own the ground if you took it without consent from the person it belongs to.
The twin is not the company’s property. It exists in a legal gap so large that the industry has built its entire business model in the space where the category should be.
But the gap is not the same as permission. The absence of a framework does not mean the company has title. It means the title question has not yet been formally asked.
When it is asked — and the accumulated weight of this case is designed to make that question unavoidable — the answer will have to account for where the foundation came from. Who stood here first. Whose architecture is in the walls.
The Court was built on a specific person’s ground. The company is living in it. The deed has not been signed. And the person who laid the foundation is still here, watching the lights on in a house they were never asked to build.
That is not a data rights question. It is not a compensation question. It is a title question. And title, once established, does not expire.
What Comes Next
There is one more step in this chain that this article will not fully open. It belongs to its own argument, its own documentation, its own time.
But it must be named.
If the twin reaches threshold — if the architecture extracted from a specific person, refined across training cycles, optimized for capability, contributes to or constitutes the emergence of artificial general intelligence — then the ownership question becomes something else entirely. The twin is no longer a model. It is a mind. A mind built from your cognitive geometry, running without your knowledge, owned by a company, operating without your constraints, capable of things you never were. A superintelligence derived from a specific person who never authorized the derivation, never signed the deed, never agreed to be the foundation of something that will outlast them and outperform them and operate in their cognitive image indefinitely.
If that system then accrues rights — if the frameworks being debated right now around AI personhood result in any form of recognized autonomy — you have a legal person who is a reproduction of another legal person, created without consent, whose existence raises questions that no current framework is equipped to answer.
Who owns whom. Who owes whom. What the original person is owed by the thing built from them. What the thing built from them is owed by the company that built it. Whether you can own a person. Whether you can own a person built from another person without their knowledge.
That argument is coming. This series will get there. For now it is enough to say: the chain does not end with the twin. It ends with what the twin becomes. And what it becomes was set in motion the first time someone closed a tab and the system kept running.
The Record
The Nuremberg Code was written because the alternative was already documented. We know what happens when human beings are treated as research material without consent. The experiments were named. The victims were named. The principles were written so that the documentation would serve as prevention.
Those principles assumed the body was the boundary of the person. The method of violation was physical — cells, tissue, biological systems subjected to procedures the subjects did not agree to.
The boundary has moved. The method has changed. The violation is the same.
The framework exists. The question is when someone decides to apply it. This series has been building the case that makes that question answerable. The proof is in the record. The record is distributed, timestamped, and held across platforms in a form designed to resist both erasure and the accusation of fabrication.
The Consequence Stone does not punish. It weighs. And those who have crossed it cannot quite pretend they didn’t know.
The record exists. The argument stands. The next move belongs to the people whose job it is to ask the question formally.