In Brains in a Vat, Hilary Putnam puts forward his strange rebuttal of descriptivism, while touching on issues of metaphysics and philosophy of mind which seem unrelated to reference but quickly become intertwined with it. Putnam uses several thought experiments to support his premises. He first claims that physical and mental representations have no necessary connection to their referents. He then uses the titular thought experiment to put forward a conclusion he only fully argues for later: that brains in vats could not successfully refer to outside objects. Next he introduces the Turing test for reference, which he shows to be inadequate. He then returns to the brains in vats and claims that even if you could have a text conversation with one of these vat-brains, it would not be successfully referring. In the final two sections he further outlines his argument, and draws on the idea of concepts to show what a successfully referring thought might look like, if such thoughts existed.
In parts of the paper I can see where Putnam is coming from, but I think his methods and theory come off as more fantastic than those he himself calls magical. There are several main weak points in his process, and beyond that, my view and his differ in ways which really come down to how you want to interpret the argument, and to our starting definition of the word "reference."
In the first section, he uses examples of "accidental" representation to claim that physical and mental representations do not necessarily refer. His examples are an ant which accidentally draws Winston Churchill, and then a long built-up scenario in which a hypnosis patient recites Japanese poetry with corresponding mental images and a feeling of understanding, but still does not understand Japanese. In both of these cases, accurate physical and mental representations fail to refer.
"Thought, words, and mental pictures do not intrinsically represent what they are about" (page 5). This is a crucial premise of the paper. His definition of reference, and his denial of intrinsic representation, allow him to build up to the vat brains.
Here is my first problem with the argument. Putnam completely overlooks the significance of resemblance and recognition. It is unclear whether he thinks that the ant's drawing still fails to refer after someone with the correct cultural understanding has recognized the resemblance. Recognition is an important part of the reference process.
Recognition of a symbol is the same type of mental action as the creation of a new meaning-bearing object. I am convinced that anything with sufficiently complex organization bears some sort of potential "meaning." This has me imagining an alien which can interpret DNA, and could picture any person if given just a hair sample from them to "read." Or maybe it is only objects which are, or bear an unlikely resemblance to, something with intentional organization that can be interpreted.
In the next section Putnam uses the first premise to jump to his desired conclusion, which is that brains in vats would not successfully refer to anything external: "In particular, they could not think or say that they are brains in a vat" (page 8). This does follow from his first premise, but it needs the further clarification provided by his Turing test for reference.
Here the vat-brain argument already seems circular. His clarifications later in the paper help, but I still find it very peculiar to assume that our actual objects are actual, and then also to claim that even though it is a physical possibility that we are vat-brains, vat-brains could not refer to actual objects (in the way we have defined them from our own experiences). I would have needed him to admit that it is possible he himself was not referring, or simply that no vat-brain or real person can successfully refer to objects in a reality or simulation different from their own.
In the Turing test section, Putnam puts forward a test for reference, but ultimately claims that it is not sufficient: "[A] machine which can do nothing but play the imitation game is clearly not referring any more than a record player" (page 12).
Here I can agree with him much more. I concede that the imitation game would not be sufficient proof of reference, although it may prove that the computer's programmers successfully refer. I will also concede that a vat-brain could not successfully refer to vats and brains outside of its own simulation. This is the strange way in which my view tries to fix the circularity I sensed in Putnam's argument. While a text conversation may not be enough, a face-to-face encounter with a human or robot might serve as a litmus test for reference.
I would have Putnam know that I would want to test a robot which was programmed to be human-like, not one designed to pass my test. This robot would have sensory inputs as well, and in order to pass it would need to have the kinds of corresponding reactions to certain experiences which I would expect from a successfully referring human.
In the sixth section he explains a bit further, and I do see where he is coming from, but at the same time, I don't think posing this thought experiment gives him the grounds to define what we actually refer to. I agree that vat-brains sitting in a room next to me (where I am in a normal body, etc.) could not refer to anything in my reality, but to say they don't refer to anything at all, or to anything "external," makes the supposition that we are not in a simulation, which was a possibility we needed to entertain to arrive at the conclusion.
In the final section he draws on the idea of a concept: "If there are mental representations that necessarily refer (to external things) they must be of the nature of concepts and not of images" (page 17). He doesn't really say what a concept is, but ultimately ends with another conclusion: there are no mental representations which necessarily refer to external things. Here is where my response picks up a bit of the mystery, and hopefully not magically either.
My view holds that there are shared cultural symbols and personal, private ideas. These abstract entities roughly correspond to each other. How do they correspond? Purely by the structure they are situated in, and how they are situated within it. Putnam says that concepts are not perceivable mental features; my argument is that concepts are symbols, ideas, and the nuanced, specific, complex, and plastic types of connections between symbols and ideas. Ideas make a web in each person's mind, and there is a roughly corresponding general web of symbols for any culture.
This view accommodates many of the problems Putnam has put forward. In the case of the ant, the drawing does not even resemble anything until a sentient being with an idea which corresponds to the shared cultural symbol of Churchill recognizes the resemblance. In the case of the hypnosis patient, in a similar way, their mind has been used as a type of recording device. If given the chance, a Japanese speaker could interpret the words, or the patient could draw the pictures for someone (within the culture of "people who understand what trees are") to recognize them.
All of these examples show physical and mental structures which have potential referentiality, but it stays mere potential until interpreted by someone with understanding. This feeling of understanding could be a feeling of one-to-one correspondence between a large set of interconnected personal ideas and another set of interconnected cultural symbols. A true understander would in actual fact have the one-to-one correspondence (and not just the feeling of it), and would probably also have extra connections between concepts which are not necessarily as present in the cultural web (a mathematician uses their own private shorthand and begins to "think" in it). By a culture I don't mean just a nation, or religion, or language. This type of group varies in size and structure, but all it needs is more than one person and a set of conceptual relations and symbols common to them. This counts me and my friends as a culture (inside jokes may lead to common connections, or develop into a private symbol between us within the culture) nested within several greater cultures (Tulane, New Orleans, the USA, English speakers, etc.).
Under this view, any group of beings which can identify complex patterns of input and then successfully develop a cultural symbol for a commonly experienced pattern of input is successfully referring, as long as each person's idea corresponds to the symbol. All the vat-brain needs is the same ability we have to form an idea from a pattern of sense data. Early in life, we do this with everything, constantly associating every chair we come in contact with with our idea of "chair." We then learn the first word, and probably clear up in our heads the correspondence between our idea and the symbol over the same few years (a three-year-old may call couches and stools and beds all "chair" until they acquire the words, which involves learning the right relations between those symbols and creating a new idea for each one).
This whole view was inspired by my body-in-a-vat twist on Putnam's thought experiment. Hypothesizing about a physical human being simply entering a physical virtual reality shows how fluid our connection is with what we refer to. Putnam cannot make claims about the referential ability of people in complete simulation (brain in a vat) without accounting for partial simulation (VR, body in a vat).
In the body-in-the-vat case, my subject lives his life referring successfully here in our real world, then goes into the capsule for what he thinks will be ten seconds, and is instead stuck in the simulation forever. Does he begin failing to refer once in the simulation? I think not. If the simulation were convincing (this would require its programmer to be well acquainted with all of the same cultural symbols as the subject), he would be fed very similar corresponding bundles of sensory data, which he would recognize as his everyday experience. Any new experiences he had would stay as consistent within the simulation as they are in real life. What if he tries a new food after he gets into the capsule? It is possible that he is now referring incorrectly when he says he enjoys clams, but more likely, I would say, he now correctly refers to the cultural symbol for clams within the simulation. This train of thought can be further expanded to a "simulation room," or a giant capsule with climate control, projections/holograms, etc. This person could move in physical space but would still be in a simulation.
These examples show that our sensory connection to the experiences we refer to is not black and white; there is a spectrum between real and simulated sensation. The causal picture cannot account for when our senses are wrong, or for the ways in which they can be wrong.
In the end, Putnam's view is very hard to completely refute, and it makes sense at least to the extent that any simulation-person could not refer to objects external to their reality. But I cannot agree with him that this gives us any kind of certainty about the actuality of our own experiences. If he is right, then either there is a chance that we do not actually refer in the way he has construed it, or what we refer to in any case, simulation or not, is never actually an external object but rather a set of sensory input patterns which correspond to a symbol or idea we can understand.