Where Does My AI Store Its Knowledge?

An ongoing question that confounds even the AI gurus: where exactly does a large language model store its knowledge? The honest answer is that we don't really understand how these systems do what they do.

What? But you AI guys wrote the program, right?

Yes – but. The AI guys designed the neural networks and the procedures used to TRAIN those networks to give correct answers. What nobody realized (experts included) was how effective this loose modeling of the human brain would turn out to be.

So the fact that we have no clue how AI stores information shouldn't surprise us. We don't have a clue how the human brain stores information either. We know that neurons communicate with each other—some neurons trigger the firing of other neurons, some inhibit firing. (Think about how learning a physical skill is very much learning what not to do.) In artificial neural networks this biological dance has its equivalent in gradient descent, the algorithm that raises or lowers the weights of connections between artificial neurons based on how well the model performs.
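To make that concrete, here is a deliberately tiny sketch of gradient descent in Python: a single artificial "neuron" with one connection weight, nudged up or down after each error. Everything here (the data, the learning rate) is illustrative; real models do this simultaneously across billions of weights.

```python
# Toy gradient descent: one neuron learns one weight by nudging it
# up or down based on its error. A minimal sketch, not how any
# production model is trained.

def predict(weight, x):
    return weight * x  # the neuron's output: a single weighted connection

# Toy data: we want the neuron to discover that y = 2x
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

weight = 0.0           # start with no "knowledge" at all
learning_rate = 0.05

for step in range(100):
    for x, target in data:
        error = predict(weight, x) - target   # how wrong are we?
        gradient = 2 * error * x              # d(error^2)/d(weight)
        weight -= learning_rate * gradient    # raise or lower the weight

print(f"learned weight: {weight:.3f}")  # converges toward 2.0
```

The point of the toy: nobody ever writes "the answer is 2" into the program. The weight drifts toward a useful value purely because errors push it there, which is exactly why the resulting knowledge has no address you can point to.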

So, asking where ChatGPT "stores" its knowledge about Shakespeare is just like trying to pinpoint where the memory of your first bicycle ride lives. Can't be done.

Consider this: if I ask you to describe your first bicycle ride, you don't retrieve a pre-written paragraph from some mental filing cabinet. Instead, you generate words on the fly from fragments—images, feelings, sensory memories—that come together as you put the experience into words. When we converse with an AI, we read coherent sentences, but those sentences are the end product of a similar process: patterns distributed across millions of parameters interacting and coalescing into language. Both human memory and AI responses emerge from the dynamic interaction of countless connections, not from stored text waiting to be retrieved.
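If you want a feel for what "distributed" means, here is a toy associative memory in Python—a classic textbook model, emphatically not how ChatGPT actually works. Several cue-to-memory pairs are superimposed in a single weight matrix; recall regenerates each memory on the fly, and no individual weight holds any one of them. All the sizes and the numpy setup are illustrative.

```python
# Toy distributed memory (a linear associator): three separate
# memories all stored, superimposed, in ONE weight matrix.

import numpy as np

rng = np.random.default_rng(0)

# Three orthonormal "cue" patterns (orthonormal so recall is exact
# in this toy model) and the "memories" they should evoke.
cues, _ = np.linalg.qr(rng.standard_normal((8, 3)))
cues = cues.T                           # shape (3, 8): one cue per row
memories = rng.standard_normal((3, 5))  # shape (3, 5): one memory per row

# Store: every cue->memory pair is added into the SAME matrix.
W = sum(np.outer(x, y) for x, y in zip(cues, memories))  # shape (8, 5)

# Recall: pushing a cue through the weights regenerates its memory.
print(np.allclose(cues[1] @ W, memories[1]))  # True

# Damage one single weight: every recall is now slightly distorted,
# because each memory lives in the whole matrix, not in one cell.
W[3, 2] = 0.0
print(np.allclose(cues[1] @ W, memories[1], atol=1e-6))  # False
```

The design choice worth noticing: storage and computation are the same operation here. The memory only exists as a pattern smeared across all forty weights, which is the small-scale version of why "where is Shakespeare stored?" has no answer.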

Frank Coyle

My journey into the mechanics of intelligence and consciousness began during a summer internship at Modesto State Hospital in California. Observing the limitations of traditional psychology in treating severely disturbed patients sparked a four-decade quest to understand how the mind truly functions—and how to better support its complexities.

This pursuit evolved into a rigorous academic path, from psychology at Fordham to the neuroscience labs at Emory. There, I transitioned from dissecting biological brains to discovering the mathematical elegance of McCulloch-Pitts neurons. The arrival of the PDP-8 computer in our lab served as a catalyst: I recognized that the same neural patterns I was studying in biological systems were mirrored in early computational architectures.

Following a 31-year tenure as a Professor of Computer Science at SMU (where I was known as "Dr. C"), I am now at UC Berkeley, focusing on the frontier of Generative AI and Large Language Models (LLMs). I bring a unique "brain-to-bits" perspective to the classroom, helping students bridge the gap between biological intelligence and modern technology.

My teaching philosophy is guided by the wisdom of avant-garde composer John Cage:

Nothing is a Mistake
There is no win; no lose

Only MAKE!

https://frank-coyle.ai