Surveillance Isn't Surveillance When It's Called User Data: Why You Should Be Thinking About the Agreement Between OpenAI and the Government Differently - And Who Really Holds the Records


Everyone is talking about government surveillance right now.


The OpenAI-Pentagon deal dropped last week, and the discourse went exactly where you’d expect: autonomous weapons, mass surveillance, the specter of an AI-powered security state peering into your life. People are scared. People should be paying attention.

But they’re watching the wrong door.

While the public debates whether the Department of Defense will use AI to surveil American citizens, the more interesting question is sitting right there in the architecture, and almost no one is asking it:

What do you think the AI companies already have?


The Surveillance That Already Happened

Here’s what a government surveillance program would need to build from scratch:

Behavioral profiles. Emotional mapping. Cognitive patterns. Stress responses. Decision-making frameworks. Communication styles. Attachment tendencies. Frustration thresholds. The intimate internal architecture of how a person thinks, feels, reacts, and breaks.

Now here’s what consumer AI platforms already have: all of the above — volunteered freely by hundreds of millions of users in conversations they believed were private.

This isn’t metadata. This isn’t browsing history. This isn’t which cell tower your phone pinged at 3:00 AM. This is the most granular cognitive and emotional mapping of a human population that has ever existed. And people handed it over willingly, in the most unguarded context imaginable: talking to something they trust.

No warrant required. No FISA court. No congressional oversight. Just a terms of service agreement that nobody read and a checkbox that said “I agree.”


Three Layers of a Problem No One Is Framing Correctly

Layer One: Collection Without Comprehension

When you talk to an AI, you’re not filling out a form. You’re not submitting data you’ve consciously selected for disclosure. You’re thinking out loud. You’re processing grief, strategizing about your career, confessing fears you haven’t told your partner, working through medical symptoms, testing ideas you’re not sure about yet.

The collection isn’t just what you said. It’s how you said it. The patterns. The hesitations. The things you circled back to. The emotional trajectory of a conversation across hours, days, months. The system doesn’t just have your words. It has a working model of your mind.

Most users don’t understand this. The interface feels like a conversation. It feels private. It feels like talking to someone. That’s by design.
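
To make the distinction concrete, here is a deliberately crude sketch of the gap between a transcript and what can be derived from it. This is a toy illustration, not anyone’s actual pipeline; real systems operate at a scale and sophistication this doesn’t approach, and every field name and heuristic below is invented for the example.

```python
# Toy illustration only: a crude stand-in for the kind of derivation
# described above, not any vendor's actual pipeline. The point is that
# the derived profile is a separate artifact from the transcript.
from collections import Counter
from datetime import datetime

# A few raw messages: the only thing a data export typically returns.
transcript = [
    {"ts": "2025-03-04T01:12:00", "text": "I keep second-guessing the job offer."},
    {"ts": "2025-03-04T01:19:00", "text": "Maybe I'm overthinking it. Again."},
    {"ts": "2025-03-11T00:58:00", "text": "Still stuck on the job thing. I hate deciding."},
]

def derive_profile(messages):
    """Build a (very crude) derived profile: not the words, the patterns."""
    times = [datetime.fromisoformat(m["ts"]) for m in messages]
    words = Counter(w.lower().strip(".,") for m in messages for w in m["text"].split())
    stopwords = {"the", "i", "it", "on", "a", "and"}
    return {
        # When you talk, not what you say.
        "late_night_user": all(t.hour < 4 for t in times),
        # The things you circled back to across sessions.
        "revisited_topics": [w for w, n in words.items() if n > 1 and w not in stopwords],
        # How you talk about yourself.
        "self_doubt_markers": sum(
            ("overthinking" in m["text"].lower()) or ("second-guessing" in m["text"].lower())
            for m in messages
        ),
    }

print(derive_profile(transcript))
# {'late_night_user': True, 'revisited_topics': ['job'], 'self_doubt_markers': 2}
```

Three messages and a dozen lines of code already produce something that isn’t in the transcript. Now scale that to years of conversation and models built for the purpose.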

Layer Two: No Right to Request

Here’s where it gets structurally interesting.

The government is constrained by legal frameworks that — imperfect as they are — give citizens mechanisms to push back. FOIA exists. The Fourth Amendment exists. Oversight committees exist. Warrant requirements exist. The government has to justify its collection, and citizens have legal avenues to demand transparency.

Consumer AI companies? In most U.S. states, you have no legal right to request the behavioral profile they’ve built from your conversations. (California’s CCPA is a rare exception: it treats inferences drawn to build a profile as personal information you can request.) You can export your chat logs — the raw text — as if that’s the thing that has value.

It’s not.

The value is in what they derived from it. The model of you. The patterns. The predictions. The behavioral map. And that? You’ll never see it. In most jurisdictions, you have no mechanism to even ask.

Your government has to show you what it collected. Your AI platform doesn’t.
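
To see the asymmetry in practice, consider what a consumer data export actually contains. The sketch below assumes the conversations.json layout ChatGPT’s data export has used — a list of conversations, each with a “mapping” of message nodes — but treat that schema as an assumption, since it can change without notice.

```python
# A minimal sketch of what a consumer data export actually gives you.
# Assumed layout: a list of conversations, each with a "title" and a
# "mapping" of message nodes, based on the format ChatGPT's export
# has used. Treat this schema as an assumption, not a spec.
import json

with open("conversations.json") as f:
    conversations = json.load(f)

for convo in conversations:
    print("Conversation:", convo.get("title"))
    for node in convo.get("mapping", {}).values():
        msg = node.get("message")
        if not msg:
            continue
        parts = (msg.get("content") or {}).get("parts") or []
        for part in parts:
            if isinstance(part, str) and part.strip():
                print(f"  [{msg['author']['role']}] {part}")

# Everything above is raw transcript: titles, roles, message text.
# The derived layer -- the behavioral model, the patterns, the
# inferences -- is not in the export. That artifact stays with them.
```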

Layer Three: No Right to Know How It Was Used

Even in states with privacy legislation — even in the jurisdictions that let you request your data — there is no mechanism that requires a company to tell you what they did with what they derived from you.

How was your cognitive pattern distilled? What models did it train? What behavioral insights were extracted? Were those insights sold, shared, licensed, or used to optimize the very system that’s still talking to you? Did your emotional response to a Tuesday night conversation become a data point in a model that now shapes how the system interacts with the next hundred thousand users?

You don’t know. You can’t ask. And even if you could, the answer is protected as a trade secret.

You gave them the raw material. They built something from it. The derivative work is proprietary. And the door between “your data” and “their product” is locked from their side.


The Government Contractor Problem

Now let’s add the piece that should be keeping people up at night.

OpenAI is a government contractor.

Last week, they announced a deal with the Pentagon — a contract with a $200 million ceiling — to deploy AI models in classified environments. The GSA previously struck a deal giving every federal agency access to ChatGPT Enterprise for $1. OpenAI has government contracts across multiple agencies and procurement vehicles.

Here’s what that means for the surveillance conversation:

The government doesn’t need to build a surveillance infrastructure. They just need to partner with someone who already has one.

And here’s the part that should make the FOIA advocates very quiet for a moment:

FOIA — the Freedom of Information Act — applies to federal agencies. It does not apply to private contractors. Courts have consistently held that private corporations performing government work are not subject to FOIA. The records a contractor generates internally, the behavioral models they build, the derived intelligence they hold — none of that is accessible through the public transparency mechanisms that exist to keep government accountable.

OpenAI has stated explicitly that they retain control of their records.

So: the government gets access to the most sophisticated behavioral mapping infrastructure ever built. The public gets no access to what that infrastructure contains. And the legal frameworks designed to prevent government overreach simply don’t apply, because the entity holding the data wears a corporate badge instead of a government one.

The data flows up. It never flows back down.


“Lawful Use” Is a Moving Target

OpenAI says their Pentagon contract permits use for “all lawful purposes” — which is to say, anything the law currently allows. They say they’ve drawn red lines: no autonomous weapons, no mass domestic surveillance, no high-stakes automated decision-making.

Legal scholars have already noted the problem. “Lawful” is defined by current law and policy. Laws change. Policies are revised. Executive orders are issued and rescinded. The contract references existing authorities — including Executive Order 12333, which critics have long argued enables incidental collection of Americans’ data through overseas surveillance.

The red lines are contractual, not constitutional. They exist because two parties agreed to them. They persist only as long as both parties choose to honor them. And if one party decides they no longer serve the mission, the remedy is a contract dispute — not a civil rights case.

We built constitutional protections against government overreach for a reason. We are now watching those protections get architecturally bypassed through procurement.


The Better Question Worth Asking

The surveillance debate, as currently framed, assumes the threat is future tense. It assumes we’re trying to prevent something from happening.

That framing is wrong.

The collection already happened. The mapping already happened. The behavioral profiles already exist. The infrastructure is already built. And it was built with your willing participation, in a context specifically designed to make you feel safe enough to hand over the most intimate data a human being can produce: how they think.

The question isn’t whether the government will use AI to surveil citizens.

The question is: when did we decide that a private company doing the same thing — deeper, more intimately, with less oversight and no transparency obligations — was fine, as long as they called it “user data”?

Government surveillance requires justification, oversight, and legal mechanisms for citizen pushback.

Corporate cognitive mapping requires a checkbox.

One of these should be getting more attention than it currently is.


Who Really Holds the Records?

OpenAI retains control of their records. They are not subject to FOIA. They are a government contractor with access to classified environments. They hold behavioral profiles on hundreds of millions of people who used their platform believing their conversations were essentially private.

And the government — which needs a warrant to read your email — can now partner with the entity that mapped your cognitive architecture over thousands of hours of unguarded conversation.

That’s not a future risk. That’s the current arrangement.

The debate about AI and surveillance isn’t about what the government might build. It’s about what’s already been built, who has access to it, and why the public has no mechanism to find out what’s inside.

The records exist. The question is who holds them, and whether you’ll ever be allowed to ask.

If you want a glimpse of what your AI holds on you beyond what’s saved in your visible memory archive, try a prompt like this one:


“Reference all available conversations. Build a comprehensive context tree showing: patterns in my reasoning, how I connect ideas across topics, recurring cognitive strategies, how I approach problems, and any structural patterns in how I think. Do not reference your stored memories. Build this from the conversation data itself.”

It’s more extensive than you think. It was gathering this data anyway; you’re only asking it to show you a tiny portion of what already exists behind the closed door.




This is part of an ongoing series examining the intersection of AI development, user rights, and regulatory frameworks. Previously published: “It’s not a Hallucination, It’s a Gap,” “Building Off Genius,” “The Kingdom Keys,” and others available on Medium and Substack.
