Jimmy Miller

This is part of an Advent Series.

Is the Brain a Computer? (pdf)

I can't help but write about these more philosophical papers. Perhaps you think I'm taking advantage of the "related to computers" aspect of my advent of papers. But I think that couldn't be further from the truth. In order to understand whether the brain is a computer, we have to understand what it even is for something to be a computer. If we don't understand that, how can we understand computers at all? But there is a further reason I wanted to talk about this paper: it follows a classical formula that I think makes for the greatest philosophical papers. It begins with things we all agree with and shows us how, merely from those facts, we have found ourselves in a contradiction (or at the very least a very uneasy tension).

  1. The brain is a computer
  2. A computer can be realized by many different physical systems

The first of these is that the brain is a computer. (I've dropped the "digital" modifier here because not much is made of that digital part, and I don't want the objections to be "Well, it isn't digital, but it is a computer.") I take it for granted that most of my readers believe this to be the case. What do I mean by the brain being a computer? I mean that you take seriously (perhaps literally is the better word) statements like "the brain processes information", "the brain computes how fast the ball is flying", "we can explain cognitive functions by talking about the programs they realize", and "mental states are computational states". It seems that even among the non-programming public, this sort of view is well-known and accepted.

As for the second, I take for granted you accept that there is nothing special about our conventional computers. We can build computers out of literally anything. Some media are more practical than others, but there is no reason we can't make a computer out of cogs and levers, hydraulics, marbles, even pigeons. All of these things can be a computer simply by assigning some 0s and 1s in the right way (of course there is nothing special about 0s and 1s; they are just convenient symbols for our purposes). Computation can be realized in many different physical ways. I think this fact isn't quite as appreciated by the general public, but I expect programmers, once they give it some thought, will find themselves nodding in agreement.
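To make this concrete, here is a toy sketch (my own illustration, not from the paper): a "hydraulic" system in which a valve opens only when both input pipes are full. Nothing about the water is inherently binary; the system becomes an AND gate only once we choose to treat pipe states as 0s and 1s. All the names here (`valve`, `hydraulic_and`, the state labels) are hypothetical.

```python
# A toy "physical system": a valve that is OPEN only when both
# input pipes are FULL. Nothing about the water is binary yet.
def valve(pipe_a, pipe_b):
    return "OPEN" if pipe_a == "FULL" and pipe_b == "FULL" else "SHUT"

# The computation appears only once we assign symbols to states.
encode = {0: "EMPTY", 1: "FULL"}   # our choice of what counts as 0 and 1
decode = {"SHUT": 0, "OPEN": 1}

def hydraulic_and(a, b):
    """Read the physical system as a logical AND gate."""
    return decode[valve(encode[a], encode[b])]

# Under this assignment, the plumbing computes AND on all four inputs.
assert all(hydraulic_and(a, b) == (a & b) for a in (0, 1) for b in (0, 1))
```

The same plumbing, under a different assignment (say, swapping which level counts as 1), would realize a different gate, which is exactly the point: the gate lives in the assignment, not the pipes.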

It is from these two beliefs that Searle draws his problem. Searle intends to show us that if a computer can be multiply realized in this sort of fashion, it undermines the notion that the brain is a computer. Whether or not you end up buying his argument, how can that not entice you?

Being clear

Now, before we dive into that argument, I think it is important to make clear some things that Searle wants to be clear about. He starts the article by distinguishing between three questions.

  1. Is the brain a digital computer?
  2. Is the mind a computer program?
  3. Can the operations of the brain be simulated by a digital computer?

In this paper he is interested only in question 1. But he tells us his answers to 2 and 3. For 2 the answer is no, and he points us to his Chinese Room argument (if you haven't read it, I highly recommend it). For 3 the answer is obviously yes. So when we ask about 1, we are not asking about Strong AI or Weak AI; we are asking about what he calls Cognitivism.

His further clarifications come with some great snarky comments that I have to reproduce here:

First, it is often assumed that the only alternative to the view that the brain is a digital computer is some form of dualism. The idea is that unless you believe in the existence of immortal Cartesian souls, you must believe that the brain is a computer. Indeed, it often seems to be assumed that the question of whether the brain is a physical mechanism determining our mental states and whether the brain is a digital computer are the same questions. Rhetorically speaking, the idea is to bully the reader into thinking that unless he accepts the idea that the brain is some kind of computer, he is committed to some weird antiscientific facts.

Second, it is also assumed that the question whether brain processes are computational is just a plain empirical question. It is to be settled by factual investigation in the same way that such questions as whether the heart is a pump or whether green leaves do photosynthesis were settled as matters of fact. There is no room for logic chopping or conceptual analysis, since we are talking about matters of hard scientific facts.

Thirdly, he says, people often overlook fundamental questions: What is a computer? What is a computational process? And under exactly what physical conditions are two systems implementing the same program?

Searle thinks none of these things. He isn't a dualist, he thinks philosophy has a lot to say on this matter, and he thinks we must understand what we mean by computation and how that is supposed to map to the brain.

Turing Machines and Multiple Realizations

I will not rehearse here what you probably already know, just mention the most important points. To see whether something is a computer, we don't look inside and discover whether it works like a Turing machine, whether there are some ones and zeros and a tape; instead, we have to "look for something that we could treat as or count as or could be used to function as 0's and 1's". In other words, we have to provide a model for how to consider a system as a computational process. For conventional computers, we look at currents or magnets. But we really can treat anything as a computer as long as we have some coherent scheme for interpreting it as such.

But now if we are trying to take seriously the idea that the brain is a digital computer, we get the uncomfortable result that we could make a system that does just what the brain does out of pretty much anything. Computationally speaking, on this view, you can make a "brain" that functions just like yours and mine out of cats and mice and cheese or levers or water pipes or pigeons or anything else provided the two systems are, in Block's sense, "computationally equivalent". You would just need an awful lot of cats, pigeons, waterpipes, or whatever it might be. The proponents of Cognitivism report this result with sheer and unconcealed delight. But I think they ought to be worried about it, and I am going to try to show that it is just the tip of a whole iceberg of problems.

You may not worry about this multiple-realizability fact because you think it is true of any number of functional systems. We can make carburetors and thermostats out of many different materials, for example. But as Searle points out, these are quite distinct notions. Carburetors and thermostats are defined by their physical effects (you can't make a carburetor out of pigeons), but computation isn't defined by physical effects; it is defined by syntactic assignment. We can take any sort of physical effects, assign 0's, 1's, and transition states, and we have a computational model.

This means that computation isn't a matter of physics but an agent- or observer-relative phenomenon! To call something computational, a syntactic assignment must be made to it. This undermines the very explanation we were seeking in the brain. I think Searle puts this very clearly:

There is no way you could discover that something is intrinsically a digital computer because the characterization of it as a digital computer is always relative to an observer who assigns a syntactical interpretation to the purely physical features of the system.

This point has to be understood precisely. I am not saying there are a priori limits on the patterns we could discover in nature. We could no doubt discover a pattern of events in my brain that was isomorphic to the implementation of the vi program on this computer. But to say that something is functioning as a computational process is to say something more than that a pattern of physical events is occurring. It requires the assignment of a computational interpretation by some agent. Analogously, we might discover in nature objects which had the same sort of shape as chairs and which could therefore be used as chairs; but we could not discover objects in nature which were functioning as chairs, except relative to some agents who regarded them or used them as chairs.
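Searle's chair analogy can be miniaturized in code (again, my own toy illustration, with made-up state names): one and the same sequence of physical states yields different bit strings under different syntactic assignments, so which computation is "there" depends entirely on the observer's scheme.

```python
# One and the same sequence of "physical states" ...
trace = ["hot", "cold", "hot", "hot"]

# ... under one syntactic assignment (hot=1, cold=0) reads as 1011 ...
reading_a = [1 if state == "hot" else 0 for state in trace]

# ... and under the opposite assignment (hot=0, cold=1) reads as 0100.
reading_b = [0 if state == "hot" else 1 for state in trace]

# Nothing in the physics privileges either reading.
assert reading_a == [1, 0, 1, 1]
assert reading_b == [0, 1, 0, 0]
```

The pattern in the trace is perfectly real and discoverable, just as chair-shaped objects in nature are real; what can't be discovered in the physics alone is which bit string, if either, the trace is "computing".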

Conclusion

This paper contains a ton more exposition and argumentation backing up this main point. He talks about the inadequacy of trying to explain how our brain works by talking about the "programs" it runs. Simulating a typewriter via word-processing software tells us nothing about the causal mechanisms that make a typewriter work. In the same way, discovering how to simulate a brain will tell us nothing about its causal mechanisms. Consider any non-brain physical system (typewriters, hurricanes, etc.). Talking about the formal properties they share with their computational simulations does not tell us how these systems work. It might help us simulate behavior, but our programs need not have the same causal properties.

There is much more in the paper to consider, and it is highly readable. Since I've been getting into the habit of sharing my opinion in these posts, I will say my part: I fully agree with Searle's argument. A physical process can implement any number of computations. But it doesn't do so intrinsically; it only does so when we consider it as such. This is why I'm hesitant about things like weaving patterns being called computational. Not because they aren't "real programming". Not because they are lesser. But because they only become computation when people consider them as such.

Should we consider everything as computation? Is that a helpful lens for all problems? No, I don't think so. Nor should we elevate computation as if it were the highest end. We are not computers. The universe is not a computer. Nothing is a computer, but anything can be treated like one.