Is your chatbot thinking?
Interest in artificial intelligence and its applications is growing dramatically. Advances in technology and the increased computational power of modern computers are making the complex algorithms behind AI easier and faster to develop. A bot is a computer program able to perform a series of tasks autonomously. Chatbots are a derivative that can simulate a conversation with a human: they accept human input, interpret it, make a decision and act accordingly.

Bots can now perform tasks previously done only by humans, and with the ability to understand and use natural language, chatbots are becoming more and more integrated into businesses. Ever since AI entered the industrial world, it has started to reshape the way businesses operate. Bots can now play a key role in a company's life and culture. They can manage customer service, provide training, run surveys among employees, assist customers during online shopping, and perform personal-assistant duties such as booking meeting rooms, arranging meetings, managing mailboxes and looking for documents. Asking them to execute tasks directly in natural language removes the barrier between technology and the end user, since users no longer need extensive training. But what happens behind the scenes? Do chatbots really understand what people ask them? Is there a thinking being somewhere in the Cloud that understands and answers our questions?
The debate over whether machines can go beyond simply mimicking human behavior has been around for a long time. The famous British mathematician Alan Turing proposed a way to determine whether a machine can think in his 1950 paper "Computing Machinery and Intelligence". His idea was that a test of a computer's intelligence can rely on external evidence alone: a judgment can be made from the answers the computer gives to questions rather than from the way it operates internally. This famous test is known as the Turing Test, although Turing himself first called it the "imitation game".
The Turing Test & Chinese Room
A judge sits in a room and can communicate through text messages with whoever is in the next room, without seeing who it is. In the next room a computer program receives the judge's messages and generates answers accordingly. If the judge, relying only on the replies coming from the room, cannot tell whether they are having a conversation with a real person or with a computer, then the computer has passed the Turing test, and it is reasonable to say that it is intelligent and capable of thinking.
But can a chatbot that passes the Turing test really think? The American philosopher John Searle proposed a very interesting answer to this question. Searle believes that the main difference between a computer program and a human mind is that programs are defined purely syntactically, while minds are semantic. To have intentional mental states, a mind must associate meaning with the information it processes. To explain this difference, Searle came up with a thought experiment known as the "Chinese Room" argument.
Imagine a person sitting in a room with a letterbox. From time to time a piece of paper with a squiggle on it comes through the letterbox. The person's task is to take that piece of paper, look the squiggle up in a reference book kept in the room, find the squiggle it is paired with, pull a piece of paper bearing the paired squiggle from a set of baskets, and post it back through the letterbox. After a while the person becomes quite good at this task. Now imagine that the person speaks only English and the squiggles are Chinese characters. If the look-up book is well designed, and the person is good at finding the matching squiggles and posting them back through the letterbox, then from the point of view of an outside observer this person behaves exactly as if they understand Chinese and are sustaining a conversation, without knowing a word of Chinese.
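The room's mechanics can be sketched in a few lines of code: the rule book is just a lookup table pairing input symbols with output symbols. The Chinese phrases below are placeholders chosen for illustration (they are not from Searle's paper), and the program never represents what any of them mean.

```python
# The Chinese Room as a pure lookup table: replies are produced by
# matching symbols, with no representation of their meaning anywhere.
RULE_BOOK = {
    "你好": "你好！",             # a greeting paired with a greeting
    "你会说中文吗？": "会。",      # "Do you speak Chinese?" -> "Yes."
    "你是谁？": "我是一个房间。",  # "Who are you?" -> "I am a room."
}

def chinese_room(squiggle: str) -> str:
    """Look the incoming squiggle up and post back its paired squiggle."""
    return RULE_BOOK.get(squiggle, "请再说一遍。")  # "Please say that again."
```

From outside the letterbox the replies look perfectly fluent; inside, there is only pattern matching.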
If a chatbot is so good at chatting in Chinese that its answers are indistinguishable from those of a native Chinese speaker, it could technically pass the Turing test. But, by Searle's Chinese Room argument, that does not necessarily mean the bot is thinking and understanding Chinese.
Syntax and Semantics
In Searle's view, running an appropriate Natural Language Processing program is not enough to give a computer an understanding of that language. A computer program is provided with a set of rules and algorithms for manipulating symbols (its syntax), but it is not provided with a meaning for those symbols (semantics). Understanding a language requires more than manipulating a bunch of symbols: it requires that a meaning be linked to those symbols. If, for example, English sentences were passed into the room along with the Chinese symbols, then an English-speaking person in the room would associate the meaning of the English symbols (semantics) with their formal structure (syntax) and would therefore understand the conversation they are carrying on. But if the symbols are only Chinese, they remain meaningless to that person: no semantics.
This is exactly how a computer program able to sustain a conversation in natural language behaves. No matter how intelligent a computer looks, and no matter how sophisticated the program that makes it behave that way, if the symbols it processes are meaningless to it (lack semantics), it is not actually thinking. Its internal states and processes, being purely syntactic, do not imply intentional mental states, since syntax alone does not give rise to semantics.
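As a concrete illustration of purely syntactic processing, here is a minimal ELIZA-style responder (a hypothetical sketch, not the code of any real chatbot): each rule is a formal rewrite that echoes the user's own symbols back, and it behaves the same whether or not the words mean anything.

```python
import re

# Purely syntactic rules: each one matches a surface pattern and
# rearranges the matched symbols into a reply template. No rule
# attaches any meaning to the words it moves around.
RULES = [
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "Why do you say you are {}?"),
    (re.compile(r"\bi feel (.+)", re.IGNORECASE), "How long have you felt {}?"),
]

def respond(message: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            # Reuse the user's symbols verbatim: syntax in, syntax out.
            return template.format(match.group(1).rstrip("."))
    return "Tell me more."
```

Even nonsense input such as "I am glorp" gets a fluent-looking reply ("Why do you say you are glorp?"), which is exactly Searle's point: the right answers can come out without any semantics going in.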
It is therefore possible for the Chinese Room to pass the Turing test even though it lacks understanding and intelligence. One criticism of Searle's Chinese Room argument is that the person in the room is just part of a system: even if they don't understand Chinese, the whole system made up of the room, the look-up book, the letterbox and so on does understand it. Searle's answer to this criticism was that even if the look-up book were removed and the person had memorized all the pairs of symbols, they still would not understand their meaning. The fact that the person gives the right answers does not mean that they understand what is going on.
So, is my chatbot thinking?
This debate can be pushed even further by asking whether our minds are just like the Chinese Room and our brains just like computer programs. I personally believe that progress in Artificial Intelligence will soon bring some answers to the question of whether a computer program can have conscious mental states and therefore be considered a thinking being. Most modern chatbots have a weak AI, specialized in one narrow area, and carry out a limited number of tasks very efficiently. They may seem intelligent in the field they have been designed and trained to operate in, but completely inadequate in others. Most commercial chatbots don't need a strong, human-like intelligence to operate.
Maybe we will soon see the rise of strong AI: computers with cognitive capabilities like those of the human brain. According to Searle, brains cause minds, and consequently the only way to produce a thinking computer is to replicate the human brain. Neural networks and deep learning methods, which emulate the way neurons interact and form connections with one another, seem to be pointing in the right direction. One day, when technology allows us to run a neural network large enough to be comparable to the human brain, it is possible that chatbots will gain the ability to associate meaning with the symbols they manipulate and will achieve conscious thought. Or maybe we will never know, because the kind of intelligence that computers develop may be so different from ours, so exotic, that we won't be able to recognize it before it's too late.
Originally posted here