Where does thought end and ownership begin?

IP law protects creations of the mind. It has always assumed the person and the creation are separate. That assumption is now being tested — not in theory, but in federal court.

The question the law hasn’t answered

Intellectual property law was built on a clean distinction. The artist and the painting. The inventor and the patent. The researcher and the methodology. A person produces something. That something can be owned, licensed, protected, stolen.

The person themselves is not the property.

Every existing framework rests on this. Copyright protects the fixed creative work — not the cognitive process that produced it. Patent protects a discrete, novel invention — not the mind that conceived it. Right of publicity protects identity as presented — name, face, likeness — not identity as it operates internally.

These distinctions made sense when the only thing that could be extracted was output. You could copy the book. You could replicate the painting. But you could not reach into the mind that produced them and extract the methodology itself.

Now you can.

The cases already in court

The legal system has been working toward this question in steps. Each step gets closer to the mind.

Voice replication: David Greene v. Google (2026) — NPR’s former Morning Edition host sued Google after its NotebookLM AI product replicated his voice without consent or compensation. Greene said he was “completely freaked out” when colleagues asked if he’d licensed his voice to Google. He hadn’t.

Expertise impersonation: Angwin v. Superhuman (2026) — Journalist Julia Angwin filed a class action against Grammarly’s parent company after its “Expert Review” feature used her name and identity to simulate her editorial judgment — without consent. Angwin’s statement named exactly what was taken: “a tech company is selling an imposter version of my hard-earned expertise.”

Cognitive architecture: No case name yet. No legal category yet. But the trajectory of the first two points directly here.

Each case represents something closer to the mind than the last. Greene’s voice is an output — recognizable, replicable, external. Angwin’s expertise is something deeper: not what she sounds like, but how she thinks about writing. The editorial judgment she built over decades. The approach that makes her feedback distinctly hers.

Angwin didn’t say they stole her reputation. She said they sold an imposter version of her expertise. That is a different claim. That is a claim about cognition.

The law caught up to voice replication. It is catching up to expertise impersonation. The question is whether it will catch up to what comes next before the extraction has already happened at scale.

What the user agreement doesn’t cover

When a person sits down with an AI system to develop an argument, solve a problem, build a methodology, or push through something genuinely new, they operate under a reasonable assumption. The words are theirs. The output is theirs. The thinking is theirs.

The user agreement addresses the words. It addresses the output. It does not address what happens to the cognitive process in between.

The chat doesn’t remember the words. But it does remember how you got there.

For most users, most of the time, this is unremarkable. Aggregate pattern learning across millions of interactions is how these systems improve, and that case — while worth scrutinizing — is the one the industry knows how to defend. It’s diffuse. It’s statistical. No single user’s contribution is identifiable or extractable on its own.

But that defense assumes ordinary use. It does not account for the exceptional case.

The exceptional case

Consider what happens when a researcher works through a genuinely novel methodology inside one of these systems. Not refining existing knowledge — building something that didn’t exist before. The specific sequence of moves. The way they hold competing variables simultaneously. The intuition about where the problem yields. The connection no one else has made, assembled in real time, inside a commercial platform, in full view of the model.

That is not aggregate data. That is a singular cognitive event.

There are no other users whose patterns look like that, because no one else has solved that problem that way. The methodology came into existence inside their system. They watched it happen. They kept it.

The industry argument (“we learn from patterns across millions of users”) doesn’t touch this case. This isn’t pattern aggregation.

This is the capture of something that happened once, in one mind, that had never happened before.

The terms of service were written for ordinary use. They were not written for the moment a new idea comes into existence inside someone else’s infrastructure. That moment has no legal category. The gap isn’t incidental. It’s structural.

What the existing frameworks miss

Copyright requires fixation: the work must be expressed in a tangible medium. The cognitive process that produces the work is explicitly excluded. You can copyright the paper. You cannot copyright the methodology that wrote it.

Trade secret protects information held privately for commercial advantage but assumes the holder acquired it legitimately. The cognitive architecture wasn’t shared. It was observed.

Patent protects a discrete, describable, novel invention. Cognitive architecture is not an invention. It is a person’s way of thinking. It cannot be described as a claim. It cannot be bounded by prior art. It exists as a dynamic process, not a static product.

Right of publicity — the framework that covers voice and likeness — is the closest existing analogy. The Angwin case is testing whether it extends to expertise and professional identity. If it does, the next question writes itself: does it extend to the operational architecture of a mind? To the solving engine, not just the solver’s name?

No existing framework answers that question. Not because it’s unanswerable. Because no one has been forced to answer it yet.

The derivative argument and why it fails

The companies will argue that what they built is a derivative: a product constructed from many inputs, transformed through training, owned by the entity that did the work of transformation. The model is theirs. The weights are theirs. The output is theirs.

This argument has precedent problems.

The biotechnology industry confronted this in Moore v. Regents of the University of California (1990), where a patient’s cells were turned into a commercially valuable cell line without his knowledge. The court grappled with exactly this question: what rights, if any, does a person retain over material taken from them and used to build a commercial product? Moore lost his property claim, but the court held that his physicians owed him disclosure and consent. The processing did not extinguish the source. The transformation did not erase the origin.

Cognitive architecture is not biological material. But the principle travels. You cannot claim intellectual property derived from an intellect while asserting that intellect has no claim. The framework collapses into its own definition.

Intellectual property. The intellect is the property.

The missing category

Every other gap in IP history was about a thing that was made. A new kind of creative work. A new medium. A new method of reproduction. The law was always catching up to what people produced. The current debate still runs along those lines: what a person using AI tools can claim as IP or copyright, and what a company can claim in identifiable outputs.

But this gap is different. This one is about a person who was mapped.

The cases have already appeared before the court: Greene’s voice, Angwin’s expertise. The legal system is working toward the question in the only way it can: incrementally, from the outside in. Each case establishes that something more intimate, more internal, more essentially you can be taken and used without consent.

The voice cases said: you cannot replicate what someone sounds like without their permission.

The expertise cases are saying: you cannot sell an imposter version of how someone thinks without their permission.

The next case will have to answer: can you extract the architecture of how someone solves problems — the engine itself, not just its outputs — and deploy it commercially without their knowledge or consent?

That case doesn’t have a name yet. The category doesn’t have a framework yet. But the trajectory is already in the record.

The question is whether the law names it before the extraction becomes unreachable, or after.

There is one dimension that makes this worse than every prior extraction case: you cannot get it back. Biological material can, at least in theory, be returned or destroyed. Cognitive architecture absorbed into model weights cannot be located, isolated, or removed. Once it’s in, it’s in. At that point we are in the territory of bioethics. But that is for another article.
