Jimmy Miller

ChatGPT Doesn't Know Anything

ChatGPT doesn’t “know” anything about recent events. ChatGPT “hallucinates” facts. ChatGPT can “understand” many different kinds of questions. ChatGPT doesn’t “care” about truth. ChatGPT is “lying” to you.

For some reason, we have collectively chosen to talk about ChatGPT using words typically reserved for humans. More precisely, we have chosen mental words to describe what ChatGPT does. Words that seem to imply ChatGPT has a mind, has beliefs, has a certain noetic structure, that it stands in relations to propositions, to truth, that it can think, and believe, and know.

Of course, to use mental language is not always to literally ascribe mental properties to something. We often personify things to aid in understanding. “The electron wants to be at the lowest energy level possible” is not a statement made by someone who truly believes electrons have desires. Yet, when it comes to ChatGPT things are a bit more muddled.

Most people would agree that ChatGPT is not conscious, nor is it the first instance of true artificial general intelligence. In other words, we don’t think of it as a person. Yet, to suggest that ChatGPT doesn’t understand anything or doesn’t know anything is met with disagreement or disbelief. The same cannot be said for the electron and its desires.

Taking Mental Talk Metaphorically

Why do we take mental talk about ChatGPT to be so fundamental? Are we swayed by media portrayals? Are we unreflectively accepting “Machine Learning” as a literal description? I think the problem goes quite a bit deeper. We no longer hold the mental to be fundamental; we treat it as merely a type of computation. To suggest that ChatGPT knows something does not strike us as a metaphor at all, but as a pure statement of fact.

But if we step back for a second, what exactly do we mean when we say ChatGPT “knows”? It’s a bit hard to say. Perhaps we mean that ChatGPT contains a lot of information? This is similar to saying things like “Facebook knows a lot about me”. But if it is just about containing information, why does it strike us as odd to say “This book knows a lot about World War Two”? Instead, we’d find it more natural to say “This book contains a lot of information about World War Two”.

So if it isn’t ChatGPT’s ability to contain information, perhaps it is about appropriate reaction? When we say something like “The car knows who is in the driver’s seat”, what we mean is that the car reacts differently depending on who sits in the seat. So ChatGPT knows how to write a poem, how to respond to questions, and how to summarize things. Perhaps it is the combination of these two senses of “know” that makes us take these attributions so seriously.

After all, isn’t that what we mean when we say humans know things? They contain certain information and they can use that information to perform actions. For some, perhaps that statement didn’t seem odd, but to me, it seems very out of place. Do we really want to suggest that humans “contain information”? Do we really think knowledge is nothing more than information and reaction?

Human Knowledge

“What is knowledge?” is a question that has preoccupied Western philosophy since Socrates. But for over 2,000 years there was general agreement about what exactly knowledge consisted of. It was a simple formula: justified true belief. Ever since Edmund Gettier published his three-page paper entitled “Is Justified True Belief Knowledge?”, this definition has fallen out of favor. But even if it isn’t completely correct, it is close enough for our purposes. Generally speaking, philosophers still hold that knowledge is true belief plus something else, and that is all we will be concerned with here.

For a person to know something, it is not enough for them to merely “contain information”; they must believe something. It would be quite odd to say “I know X” and then, when asked whether we believe X, to say no. So what does it take to have a belief? Well, at the very least, it requires us to have a certain attitude toward a proposition. When I consider a statement like “George Washington was the first President of the United States”, my attitude toward that statement is quite different from my attitude toward “The present King of France is bald”. To believe something is more than to state it. It is more than to be aware of it. It is to accept it.

Do we think ChatGPT has beliefs? It seems clear to me that we don’t. Do we think that ChatGPT has a certain attitude toward the output it produces? It’s clear it does not. So ChatGPT already fails to have knowledge in the way humans do. Given this, we can skip the question of justification. We can ignore the truth criterion; we need not dive into that epistemological debate. ChatGPT does not have beliefs. So when we use the word “know” for ChatGPT, it does not mean what it means when we apply it to humans.

ChatGPT as a Text

Our choice of mind as the guiding metaphor for ChatGPT and all similar AI systems has led to quite a bit of confusion. Our metaphors have been taken too literally. We talk about ChatGPT knowing and understanding; we act as if ChatGPT’s use of language is the same as ours. Our metaphors trick us into believing that ChatGPT itself is participating in our language games.

But this is merely a confusion. ChatGPT does not have intent. It does not have goals. It does not have propositional attitudes. It does not have a theory of mind or a perspective. ChatGPT is not a person, but a text. ChatGPT contains information the same way a book does. Where it differs is just in how we retrieve that information.

Espen Aarseth coined the term “ergodic literature” to refer to texts that take non-trivial effort to traverse. These include books like Choose Your Own Adventure, where the reader must decide how to move through the text. Aarseth extends this notion to include cybertexts, texts which utilize computation in their production. A text-based adventure game is a great example of such a text.
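To make the idea concrete, here is a minimal sketch of that kind of cybertext in Python. The rooms and descriptions are invented for illustration; the point is only that the text the reader sees is produced by computation, and which text appears depends on how the reader chooses to traverse it.

```python
# A toy cybertext: the reader's commands determine which part of the
# text is produced next. Rooms and descriptions are made up.

rooms = {
    "hall": {
        "text": "You are in a dusty hall. Doors lead north and east.",
        "exits": {"north": "library", "east": "garden"},
    },
    "library": {
        "text": "Shelves of unread books. A door leads south.",
        "exits": {"south": "hall"},
    },
    "garden": {
        "text": "An overgrown garden. A door leads west.",
        "exits": {"west": "hall"},
    },
}

def play():
    location = "hall"
    while True:
        room = rooms[location]
        print(room["text"])
        command = input("> ").strip().lower()
        if command in ("quit", "exit"):
            break
        if command in room["exits"]:
            # The reader's choice selects the next stretch of text.
            location = room["exits"][command]
        else:
            print("You can't go that way.")

if __name__ == "__main__":
    play()
```

No one is tempted to say this program knows where the garden is; it contains a text and a rule for traversing it.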

Here we see a much better way of thinking about ChatGPT: as an incredibly large cybertext, one whose contents are traversed via natural language, and which contains true facts but also alternative realities. Rather than hallucinations, we get falsehoods. Rather than knowing, we get generating.

Why Care?

Perhaps you find this distinction useless. Yes, ChatGPT isn’t a mind, but incorrect language has its uses. It is much easier to talk about ChatGPT as if it could do all these things, and by all appearances, it can. What difference does it make whether it “actually” understands or merely appears to understand? I will admit I have no argument against this. This pragmatic bent has just never had much appeal to me.

What seems important to me here is that we often make judgments and decisions based on these bad metaphors. We use these metaphors as premises in our lines of reasoning, taking them literally. When issues arise, the use of these metaphors often clouds the issue. We find it harder and harder to draw the distinctions we want to make.

I know my short blog post here isn’t going to change the vocabulary we use. It probably won’t even stop me from speaking that way. But if we are going to reflect on these new systems, I think it’s crucial that we think about them properly. Thinking about ChatGPT as a text is enlightening. It places on us the burden of interpretation, and it opens up ChatGPT’s output to criticism. It asks questions like: what would a high-brow, literarily sophisticated ChatGPT look like? How can we bring taste to our creation of models? It shifts the discussion away from attributes of minds to attributes of texts.

If we are to understand and utilize language models to their fullest extent, I think it is crucial that we change our conceptual model. We must think of ourselves not as reasoning with a person, but as exploring a text, discovering how to traverse it well. Finding the parts of the text that interest us, that help us tell the stories we want to tell.