Real Intelligence

Can a digital machine ever be truly intelligent? And if not, could it be an emergent property of a complex system?

No one is surprised when a computer is good at math. It can whiz through a whole load of complex calculations in far less time than you or I can but, for some reason, we don’t consider that intelligence.

On the other hand, we do think it intelligent when a computer appears to understand human language.

After all, if a computer can understand words, phrases and sentences, if it can read my emails and throw away the spam, isn’t it displaying some sort of intelligence? It must be pretty clever, right?

Well…not quite.


It isn’t that the computers actually understand the emails. It’s that ingenious engineers, programmers, and statisticians have found a way of translating a natural language problem into a numerical one.

It’s a case of classification — is an email spam or is it not spam? Humans can usually recognise spam fairly easily: it’s an unsolicited email that’s usually trying to get you to part with some money or information. So how do the spam filters used by your email provider do it?

Here’s what they don’t do: they don’t read and understand it in any meaningful way.

To start with, they’re given thousands of emails that have been manually classified as spam, or not spam. They break up these emails into individual words and create a statistical model of a spam email based on the words it contains. Incoming email is similarly analysed, and then compared to the model. The probability of it being spam is calculated — if the probability is above a certain threshold, then the email is marked as spam.
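To make the idea concrete, here is a minimal sketch in Python of that kind of word-counting filter. Everything in it is invented for illustration: the tiny training set, the add-one smoothing and the 0.9 threshold are all assumptions, and real filters use far larger corpora and more refined statistics, though the spirit (a variant of naive Bayes) is the same.

```python
from collections import Counter
import math

# Toy training data: emails a human has already classified by hand.
# Real filters are trained on thousands of such examples.
spam_emails = ["win money now", "claim your free prize money"]
ham_emails = ["meeting moved to monday", "notes from the project meeting"]

def words(text):
    return text.lower().split()

# The 'statistical model': how often each word appears in each pile.
spam_counts = Counter(w for e in spam_emails for w in words(e))
ham_counts = Counter(w for e in ham_emails for w in words(e))
spam_total = sum(spam_counts.values())
ham_total = sum(ham_counts.values())
vocab = set(spam_counts) | set(ham_counts)

def spam_probability(message):
    # Naive Bayes scoring: combine per-word probabilities in log space,
    # with add-one smoothing so unseen words don't zero the score out.
    log_spam = math.log(len(spam_emails) / (len(spam_emails) + len(ham_emails)))
    log_ham = math.log(len(ham_emails) / (len(spam_emails) + len(ham_emails)))
    for w in words(message):
        log_spam += math.log((spam_counts[w] + 1) / (spam_total + len(vocab)))
        log_ham += math.log((ham_counts[w] + 1) / (ham_total + len(vocab)))
    # Convert the two log scores back into a probability of spam.
    return 1 / (1 + math.exp(log_ham - log_spam))

THRESHOLD = 0.9  # arbitrary cut-off for this sketch

incoming = "claim your free prize"
if spam_probability(incoming) > THRESHOLD:
    print("marked as spam")
```

Nothing in that calculation involves the meaning of the message; the filter only counts how often words turn up in each pile and compares probabilities.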

Identifying words in a text is not difficult because, to a computer program, letters and other characters are simply numerical codes. So it’s easy to spot the codes for spaces and punctuation marks in a message; the words are the runs of characters between them.
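As a quick illustration (a Python snippet with a made-up message), a string really is just a sequence of numbers to the machine, and the ‘words’ drop out of a simple split on the codes for spaces and punctuation:

```python
import re

message = "Claim your FREE prize!"

# To the machine, each character is just a numerical code.
print([ord(c) for c in "FREE!"])            # [70, 82, 69, 69, 33]

# Splitting on the codes for spaces and punctuation leaves the words.
print(re.findall(r"[A-Za-z']+", message))   # ['Claim', 'your', 'FREE', 'prize']
```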

So, far from understanding what a spam email is, the spam filter software deals with strings of numbers (the words) and uses numerical methods to tag an email as spam, or not spam.

This is not to denigrate the humble spam filter. It is ingenious, but it is also the same kind of calculation that computers have always been used for.


But what about machine translation? While not perfect by any means, it has progressed enormously over the last few years. So this, surely, must be an example of real machine intelligence.

Well, the thing is, when humans translate from one language to another, they understand the meaning of a text, and then use their knowledge of another language to express that meaning in the second language.

These days, computer translation is often Statistical Machine Translation (SMT). While multilingual dictionaries can be used to look up individual words, SMT relies on having a corpus of thousands of documents that have already been translated in order to handle sentences, and even full documents. By using statistics and probability, it finds a likely match for the text to be translated, and the equivalent in the target language.
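A toy sketch of that statistical matching is shown below. The ‘phrase table’ and its probabilities are invented for illustration; a real SMT system would estimate them by counting phrase pairs across millions of aligned sentences, and would also score word order and fluency, which this sketch ignores.

```python
# Invented phrase table: source phrase -> (candidate translation, probability).
# Real SMT systems learn these probabilities from a large aligned corpus.
phrase_table = {
    "good morning": [("buenos días", 0.8), ("buen día", 0.2)],
    "my friend": [("mi amigo", 0.7), ("mi amiga", 0.3)],
}

def translate(sentence):
    # Pick the most probable translation for each known source phrase.
    output = []
    for phrase in sentence.lower().split(", "):
        candidates = phrase_table.get(phrase, [(phrase, 1.0)])  # pass unknowns through
        best, _ = max(candidates, key=lambda pair: pair[1])
        output.append(best)
    return ", ".join(output)

print(translate("Good morning, my friend"))   # buenos días, mi amigo
```

Again, nothing here involves the meaning of either language; it is purely a matter of looking up the most probable match.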

So, computers are not yet intelligent in the way we are. They are only lightning fast calculating machines, as they have always been, except that they are now faster and have more data to work with than ever before.


But can we imagine a future when they will be as clever as us?

It seems to be accepted by most commentators that, in the not too distant future, AIs will become as intelligent as, and then outstrip, their human inventors. But there is at least one school of thought that maintains that a computer running a program can never be regarded as truly intelligent.

The Chinese Room is a thought experiment formulated by the philosopher John Searle that attempts to demonstrate that computers running programs cannot be intelligent. It proposes that there is a computer program that can accept messages written in Chinese and make appropriate and correct responses, also in Chinese. This system passes the Turing Test, in that the responses given by the computer are indistinguishable from those that would be given by a human. The system, therefore, appears to be intelligent.

But Searle argues that it isn’t.

He asks us to imagine a man, who understands no Chinese, in a sealed room with a listing of the program that the computer is running, along with an abundant supply of pencils and paper. The man receives messages in Chinese through a slot in the wall of the room and uses the program listing to work out how to deal with each message and compose a response. In this way, he produces the same result as the computer.

However, the man in the room is not making intelligent responses to the Chinese messages. He cannot, because he does not understand the language. He is simply, and mindlessly, following a set of instructions, just like a computer.

Searle’s contention is that any system that functions by following a computer program — and that means all of the computer systems that we have ever created — does not act in an intelligent way; it is following a set of instructions and so can, at best, only be simulating intelligence.


There have been arguments against this idea.

The best known, often called the ‘systems reply’, is that while the man in the room is not behaving intelligently and does not understand Chinese, the system as a whole (the man, the room and the program together) is intelligent and does understand.

Searle has a simple response to this: imagine that the man in the room internalises the computer program, so that instead of reading the instructions from paper he works through them from memory. Now we have a man who doesn’t understand Chinese but who can, through an immense feat of memory, read messages and respond appropriately in that language. The man is still not acting intelligently; he is still simply following a set of instructions.

But what would really happen under those circumstances? Perhaps the man would begin to learn Chinese. He would begin to remember certain symbols and the correct actions to take. And now that he can leave the room and move about in the outside world, he would begin to see the context in which those symbols are produced, and begin to understand them. He would be learning Chinese the way a child does.

This would happen because the man’s brain contains more than just the memory of the computer program: there are many other things going on. He will have a memory of the things he has done in the past, of the symbols he has seen before, and of the responses he has made. He will make associations between the actions he is taking and the outside world. Through the experience of ‘running’ the Chinese Room program, he will be learning.


What is so special about a biological brain that it enables us to learn? And can we build a computer that would do the same job?

The human brain is not a computer running a program. Unlike such a program, it has many ‘programs’ running at the same time, and they interact with and modify each other and themselves. The neurons in the brain are forever making and remaking connections. Everything we do changes the structure of our brain.

In the case of the man who is now outside of the Chinese room, the symbols that form part of the original program become associated with other concepts such as cars, trees or elephants. And, as he becomes more familiar with the language, he will find shortcuts, parts of the program that he no longer needs, because of the way that he has gained knowledge from experience.

Eventually, the memory of the original program will be redundant because he will have actually learned Chinese.

A computer running a single program may never be intelligent. However, perhaps it is possible that reasoning and intelligent behaviour are emergent properties of a system that more closely mimics the complex way in which the brain works.

I guess we’ll know when a computer starts talking to us in Chinese.