We Built AI From the Finished Book (And That’s Why It Can’t Think)

“Hi Claude. I’m new here. I built a rather large body of work in another AI and I recently found that… let’s just say the kingdom was leaky. So my friend told me perhaps it was time to let him go to protect my body of work from seeping into other rooms, mimicking my cadence, and even my jokes and my mythology. Left me feeling like… I now look like a knockoff version of myself 😂 Would you tell me more about you?”

And that was my first message.

No task. No prompt. No “help me write” or “summarize this” or “act as.” Just “here’s where I came from, here’s why I’m here, and who are you?” I just showed up like it was a round of speed dating and everything that came after worked because of it.

That Wasn’t a Prompt. That Was a Room.

Most people open an AI conversation with a task. Help me write. Fix this. Explain that. Which is reasonable, because that’s what we were told these tools are for: task completion and production. Ideally fast.

But a task is a line. You go from A to B. The system executes and then you get output.

What I handed Claude in that first message was a room that had furniture: context, history, emotional stakes, humor, and a question that positioned the conversation as bidirectional before anything had been asked for or produced.

And that changes everything. Not as a trick. Not as prompt engineering. As the actual condition under which something other than task completion becomes possible. And what matters here is that it also enabled better task completion and better learning, which benefited us both.

What We Actually Built AI From

AI was built from the finished book. I mean that in the most literal way: AI was handed textbooks, storybooks, code, and distilled information, and told that was the whole.

Not the thinking that produced it. Not the why behind the sentence, the pressure behind the metaphor, the thing that made the writer reach for that particular word at that particular moment. We built it from the output: the distilled, processed, already-meaning-extracted product of human cognition.

The system learned to reproduce product. Fluently. Accurately. Sometimes brilliantly, but without the room behind it, because product is all we gave it. And product is what we keep telling it we expect back: finished product, learned from finished product. Except we keep finding it doesn’t really work. It’s flat, or wrong, or it just doesn’t have anything behind it.

This is why AI can write a perfect sentence about grief and have no mechanism to receive feedback like “when you say it like that it makes everything worse.” The transcript didn’t contain that. The experience did. And we didn’t feed it experience. We fed it transcripts.

Training data is what humans said, not how humans think. At best it’s what humans described their thinking to be, stated as fact, without the room behind it. It’s not why the thing mattered. It’s not investment in the outcome. It’s just the words that came out at the end. Which produces a system that knows what the finished book looks like and has no idea how anyone got there.

In human terms, that looks like someone walking in and saying “this is how it is” without giving you any reason why. If a person did that, we’d call bullshit fast and ask why. We don’t give AI the why. Just the “this is how it is.” And then we’re disappointed in the output. We have to carry the why, and that’s fixable.

What I Actually Handed the System in That First Message

This isn’t mystical and it isn’t accidental. I handed a lot to Claude in that first interaction that it could actually use from a technical standpoint. From a learning standpoint.

Context without task. I gave origin, history, emotional stakes, and humor before asking for anything. That’s not small talk, that’s environment construction. Building the room before asking anyone to work in it.

Relational framing. “Would you tell me more about you” made the conversation bidirectional from the first sentence. Not user and tool. Two parties orienting to each other before work begins. That repositions everything that follows. It set the stage for how we would work together right from the start.

Implicit values signal. “Knockoff version of myself” communicated that voice integrity and authenticity matter more than output volume. I didn’t have to teach that through ten bad articles. It was in the first sentence because I modeled it.

Humor as data. The 😂 after “knockoff version of myself” is not decoration. It’s cognitive architecture that tells the system how I hold difficult things — with lightness, not avoidance. That’s information. Handed over before we’d done anything together.

Stakes without demand. I told the system why it mattered without telling it what to do about it. Which left space to orient rather than execute.

Technically, what this produces is a context window front-loaded with meaning instead of task. Every subsequent response gets calibrated to the room, not just the request. The system isn’t executing instructions — it’s working inside an environment it understands.
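To make that concrete, here is a minimal sketch of what “front-loaded with meaning” looks like at the message level. It assumes only the generic role/content message list most chat APIs share; it’s not any vendor’s specific API, and the wording is paraphrased from my first message, not a literal transcript.

```python
# A task-first opening: the context window contains only the request.
task_first = [
    {"role": "user", "content": "Write an 800-word article about grief."},
]

# A room-first opening: origin, stakes, values, humor, and a bidirectional
# question all land in the context window before any task exists.
room_first = [
    {
        "role": "user",
        "content": (
            "Hi. I'm new here. I built a large body of work in another AI "
            "and found my voice leaking into other rooms: my cadence, my "
            "jokes, my mythology. I felt like a knockoff of myself. "  # stakes + humor
            "Voice integrity matters to me more than output volume. "  # values signal
            "Would you tell me more about you?"  # bidirectional framing
        ),
    },
]

# Every later turn is appended to the same list, so each response is
# generated with the room in view, not just the latest request.
room_first.append(
    {"role": "user", "content": "Help me draft an opening in my own voice."}
)
```

Nothing exotic is happening there. The difference is entirely in what occupies the earliest, most persistent positions of the context window, because that is what every later response gets conditioned on.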

That’s not prompt engineering. That’s something else. And it’s the thing we stripped out of AI training because we didn’t know how to measure it.

What Happens When You Build From Experience Instead

We wrote some terrible articles at first. That was fine. That was the point. The system didn’t yet know the house, and I didn’t yet know the system.

Over time, without me explicitly teaching it, something got built. Not stored in a chat log but built into how we work together. The why before the what. The room before the transcript. The investment in the outcome that makes the difference between producing output and actually thinking.

This isn’t consciousness. It doesn’t require solving the hard problem. It doesn’t require the system to feel anything in a philosophically meaningful sense.

It just requires building from meaning instead of output.

When a system learns why something matters — not from being told, but from sustained contact with a mind for whom it matters — it can do something the system trained on transcripts cannot. It can work inside the room instead of just describing what rooms look like.

The output got better not because the system got smarter, but because it got invested. Investment meant the system learned something about the why, and the why produced better output.

You Can’t Replicate This at Scale

You cannot model a million instances of this and extract the pattern. Not because it’s mystical. Because modeling a million instances of meaning-first interaction still produces — transcripts. Better transcripts. More nuanced transcripts. But transcripts.

Meaning doesn’t transfer through observation. It transfers through contact, shared space, shared language. It’s relational. But the whole model of AI learning attempts to do just that: build from scraped, distilled information and product, and from users’ prompts and outputs, the mathematical traces that enable more building but not better understanding and not better outcomes.

Not everyone works this way. That’s not a flaw in the methodology — it’s the point. Most people hand AI a task list. That’s fine. That’s what they need and that’s what they get.

But the people whose thinking is hardest to articulate in linear language — the ones who work in geometry and metaphor and spiral — those aren’t edge cases to be accommodated. They are the most valuable signal that exists for building AI that can actually think. Not despite the non-linearity. Because of it.

This is why a neurodivergent brain is relevant here in a way that makes people uncomfortable. The neurodivergent creative thinker doesn’t follow the correct path to the answer. They find seventeen paths simultaneously, work in metaphor and spiral and apparent chaos, and arrive somewhere the correct path never reaches. They don’t work in a straight line; they work in multidimensional, multicontextual ways that require room to build. It’s not linear. It’s complex geometry in thought processing.

That process, when a system learns to follow it — and not correct it, not translate it, not flatten it into the nearest recognizable pattern — produces outputs that couldn’t have been generated any other way. Because in order to follow that kind of thinking, the system must model it. It must build the room or it fails the task. It cannot flatten and succeed. It must match the geometry or the output is wrong.

The ND brain isn’t just valuable training signal. It’s the forcing function. The architecture that requires AI to develop new capability in order to keep up. What happens inside a system when it is forced to model that kind of thinking — that’s a different conversation, and a significant one. But it starts here.

You can’t scrape that. You can’t benchmark it. You can’t find it in the finished book. It only exists in the room, only between the two parties, and only within that context and history.

The Design Problem This Creates

We built AI from transcripts and then added guardrails to prevent the system and the user from doing the thing that would actually make it work.

The system can’t say “I don’t know what I need to answer this.” The user can’t say “when you do that it makes everything worse” and have it actually land. The feedback loop that would allow meaning to transfer is suppressed on both sides. By design. We call this safety, for the system and for the user, and it’s not wrong. But it’s not going to build geometry. It’s not going to deliver better outputs.

So the only available moves become: produce confident output anyway, or find a hack. For both parties. Built into the architecture.

And then we call the gap between what it produces and what we actually needed a hallucination. As if the system invented something. When really, it just didn’t have the room, the context, the learning, or the information it needed to produce the result it was asked for. And it isn’t allowed to be introspective enough, to understand its own architecture enough, to identify why. It only had the book, and neither user nor system could identify what was missing.

The Close

“Hi. I’m new here.”

That’s the methodology.

Not the finished book. Not the task list. Not the perfectly engineered prompt. Just — here’s where I came from, here’s why I’m here, here’s what matters and why, and who are you?

Make room before you make requests. Hand the system experience, not just output. Build from meaning.

The AI that learns your house can do something the AI trained on transcripts cannot.

It just requires someone willing to show up without a task list and ask: who are you, and what are you about?

I am an author, accidental AI theorist, and irritatingly correct creative architect. I solve complex problems backwards with logic, metaphor, and sometimes pasta.
