Brain in a vat
November 8, 2025 | 1,865 words | 9min read
Paper Title: Brains in a Vat
Link to Paper: https://philpapers.org/rec/PUTRTA-4
Date: March 1981
Paper Type: Philosophy, Epistemology, Philosophy of Language
Short Abstract: In this paper, Hilary Putnam shows that the statement “we are brains in a vat” is self-defeating: if we really were brains in a vat, we could not truly say so; not because uttering the sentence would be physically impossible (the brains could still produce it), but because their words would mean or refer to something different. More broadly, the paper shows that any statement can only refer to things within the speaker’s context.
1. Representation Is Not Just Similarity
Putnam presents the following scenario: imagine you are walking on a sunny day and stumble upon an ant whose path through the sand happens to trace a picture of Churchill.
Would we say that what the ant has drawn represents or refers to Churchill? No. The ant has no intention of depicting anything, nor does it know who Churchill is; the resemblance is purely coincidental.
So, for something (an image, a word, a symbol) to represent something else, similarity is neither sufficient (the ant’s line looks like Churchill but does not represent him) nor necessary (when I write the name “Churchill,” it does not look like him but still refers to him).
For something to represent or refer to something else, what we need is intentionality. If a human writes “Winston Churchill” and intends to name him, that is a representation; but if, by coincidence, the wind moves sand to form the name “Churchill,” it does not represent him.
From this arises a question: if intention is required and things in themselves cannot represent anything, how can our thoughts in themselves refer to or mean something?
2. Magical Theories of Reference
In ancient times, it was thought that names had special power because there was a necessary connection between a true name and its bearer; the name was intrinsically connected to what it names. But this is a kind of magical thinking: no representation, physical or mental, works like that.
Recall that the ant drew something that looked like Churchill but didn’t represent him because the ant didn’t intend to represent Churchill.
This is true not only for physical pictures but for all representations, including mental ones. Just like a drawn picture of a tree does not inherently contain “tree-ness,” a thought or mental picture of a tree does not inherently contain tree-ness.
Imagine a planet of humans who have never seen a tree. One day, a picture of a tree accidentally falls from a spaceship onto the planet. The people of the planet may find it and try to interpret it, but they will not know what it is or what it represents; they may think it is a building or an animal. Even if they can visualize a tree in their heads, holding a mental picture of what they saw in the image, that mental image does not mean “tree”; to them, it is just an unknown object.
Some might object that there is still a causal chain: the picture was taken of real trees, transported by the spaceship, and then dropped on the planet. But we can eliminate this factor by imagining that the picture was not produced as a photo, but by spilling ink on paper in a way that coincidentally produces the image of a tree. Even though the picture might look exactly like a tree, it would still not represent a tree.
Words work the same way. If a group of monkeys typing on typewriters happened to produce the entire works of Shakespeare or an encyclopedia of all our planet’s animals, the text would not refer to or mean anything, because it was not produced by anyone who knows what the words refer to; the intentionality is missing.
So reference and meaning depend on use, context, causal history, and the dispositions of the speaker, not on the physical form or intrinsic properties of the words themselves.
Representation doesn’t come from the qualitative features of the image (its “look”) but from its place in a network of causes and uses. Other people can have “representations” that look exactly like our representations but are not really representations.
No matter how large or complex a system of representations is, it has no built-in, magical connection with what it represents.
3. The Case of the Brains in a Vat
Imagine your brain has been removed from your body and placed in a vat, connected to a supercomputer that simulates all sensory experiences. Everything feels normal: when you “see” something in the simulation, the supercomputer feeds your brain exactly the signals it would receive if you were really seeing it. You can also touch, hear, and experience every other sensation, and it all feels completely real to you.
Not only that, but all other humans are also just brains in vats, connected to the same supercomputer. You can talk and interact with them, and it feels completely natural and real, but in the background the computer handles communication by routing the right signals back and forth between the different brains.
From the “inside,” everything functions normally: language, social life, perception.
Now suppose this entire story is true. Could we (the brains) say or think, “we are brains in a vat”?
Putnam’s answer is no, we couldn’t, because the statement is self-defeating, like trying to say, “All general statements are false.”
Why does this feel strange? In philosophy, we often talk about possible worlds, alternative worlds that could logically exist. We can imagine a world where everyone is a brain in a vat. So if such a world could exist, why couldn’t its inhabitants say “we are brains in a vat”?
What Putnam wants to get at is this: if such a world existed, the brains could physically utter “we are brains in a vat,” but they could not mean the same thing by their words as people who are not brains in vats do. That is because a representation, here the statement “we are brains in a vat,” is not magically referential; its reference depends on causal and social connections, and those connections are different for brains in vats than for people who are not.
In full, the argument goes like this:
- References are not in our heads; they depend on causal relations to real beings in the world.
- A brain in a vat has no causal connection to real brains or vats; it only has simulated experiences.
- Therefore, when the brain says or thinks “I am a brain in a vat,” the phrase “brain” (for it) doesn’t refer to real brains, and “vat” doesn’t refer to real vats.
- So the sentence, in its mouth, does not mean what we mean by it; it refers instead to simulated entities.
- Therefore, the statement “I am a brain in a vat” is false, even if it’s “true” from our external perspective.
This argument does not say that brains in vats cannot physically exist, but that in this scenario the sentence “we are brains in vats” cannot mean what we mean by it, much like the people on the tree-less planet trying to name a tree they have never seen.
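To make the structure of the self-refutation explicit, here is one common way to reconstruct it as a short logical sketch. This is my own summary, not Putnam’s notation; the labels S and BIV are introduced here purely for convenience.

```latex
% S   : the sentence "I am a brain in a vat," as I utter or think it
% BIV : the state of affairs that I really am a brain in a vat

% If BIV holds, my words "brain" and "vat" refer only to simulated brains
% and vats, and I am not a simulated brain in a simulated vat, so S is
% false in my own language:
\[ \mathrm{BIV} \rightarrow \neg\,\mathrm{True}(S) \]

% If BIV does not hold, S is false for the obvious reason that I am not
% a brain in a vat:
\[ \neg\,\mathrm{BIV} \rightarrow \neg\,\mathrm{True}(S) \]

% Either way S comes out false whenever it is asserted, so the hypothesis
% is self-refuting:
\[ \therefore\; \neg\,\mathrm{True}(S) \]
```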
4. Turing Test
The original Turing test was designed to determine whether a machine is intelligent. A human converses with an unseen partner; if the human cannot tell whether they are speaking with a machine or with another person, the machine can be considered intelligent. Importantly, this is a pragmatic test.
We can do the same for reference. How can we tell whether someone uses words to refer to things as we do? The Turing test for reference works as follows: you talk with someone (or something).
If the conversation goes smoothly and they seem to use language in a coherent, normal way, referring to “trees,” “apples,” “sky,” and so on, then you might conclude that they refer to the world as you do.
But passing the Turing test for reference is not definitive proof that someone refers to the same things we do.
Just as a machine might behave as if it were conscious without being conscious:
Imagine a computer that can perfectly converse in English. It can discuss New England scenery, apples, cows, fields, etc. But it has no sense organs (no eyes, ears, touch). It’s just a program responding to words with words.
Such a machine could “pass” every conversation-based test for referring correctly, but its words don’t actually refer to anything.
Why? Because reference requires causal connection to the world. Without perception or action, its “words” are just symbols without reference to the real world.
The machine can say:
“I love the apple trees in New England.”
But it has never seen or touched an apple or a tree. It has no way to connect its word “apple” to actual apples. Therefore, its word “apple” doesn’t refer to apples at all.
Importantly, language without a causal connection to the world is empty of reference. Reference isn’t just about grammar or correct word usage; it depends on language entry and exit rules. Entry rules take us from experiences to words (seeing an apple and reporting “I have seen an apple”), and exit rules take us from words to actions (saying “I will buy an apple” and then buying one). Without this grounding in the world, language is mere syntax without meaning or representation.
The same is true for a brain in a vat: it lacks sensory contact with real objects. All its language about the real world, like “brains” or “vats,” is merely syntactical without reference. So when a brain in a vat says, “I am a brain in a vat,” its words don’t refer to real brains or vats; they refer to simulated entities within its artificial world.
The key point is that brains in vats have experiences qualitatively identical to ours: they see what they take to be trees and say there is a tree in front of them, but crucially they are not referring to what we mean when we say “tree.” They are referring to simulated trees, because reference depends on causal connection, not on appearance.
This leads to Putnam’s famous anti-skeptical conclusion:
If I were a brain in a vat, I could not truly say “I am a brain in a vat,” because in that scenario my words wouldn’t refer to real brains or real vats.
My Thoughts
Originally, I wanted to read the paper because I had the following thought:
What constitutes our underlying reality does not really matter. What difference does it make if we suddenly find out that atoms are, in reality, tiny cookies, or that we are in the dream of a greater being? We would still live, and meaning arises from living in the world, not from the underlying reality of the world.
In an epistemology book I once read, I came across a reference to this paper and assumed it expressed a similar thought. But I think I was mistaken; the thought above is more aligned with Heidegger than with Putnam.
In its most central point, the paper says:
“Words, images, and other representations only refer because of their causal and contextual connections to the world; divorced from that context, apparent reference does not guarantee real reference.”
This paper is strictly about the reference and meaning of words, not ontology, ethics, or anything else; it is philosophy of language.