Chapter 39 — Course: A Little History of Philosophy / Lesson 39
Can Computers Think?
Alan Turing and John Searle
You’re sitting in a room. There is a door into the room with a letterbox. Every now and then a piece of card with a squiggle shape drawn on it comes through the door and drops on your doormat. Your task is to look up the squiggle in a book that is on the table in the room. Each squiggle is paired with another symbol in the book. You have to find your squiggle in the book, look at the symbol it is paired with, and then find a bit of card with a symbol that matches it from a pack in the room. You then carefully push that bit of card out through your letterbox. That’s it. You do this for a while and wonder what’s going on.
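The procedure described above — look up an incoming symbol, find its paired symbol, pass that back out — is purely mechanical, and can be sketched as a simple lookup table. This is only an illustration of the thought experiment, not a claim about how any real system works; the question-and-answer pairings below are invented for the example:

```python
# A minimal sketch of the Chinese Room procedure. The occupant's rulebook
# is just a table pairing incoming symbols with outgoing ones; no meaning
# attaches to either side of the pairing.
RULEBOOK = {
    "你好吗": "我很好",    # hypothetical pairing: "How are you?" -> "I'm fine"
    "你是谁": "我是约翰",  # hypothetical pairing: "Who are you?" -> "I'm John"
}

def room_occupant(incoming: str) -> str:
    """Look the squiggle up in the book and hand back its paired symbol.

    The occupant needs no understanding of either symbol to do this.
    """
    return RULEBOOK[incoming]

print(room_occupant("你好吗"))  # a fluent-looking reply, produced without understanding
```

The point of the sketch is that the lookup succeeds whether or not anyone involved knows what the symbols mean — which is exactly the gap Searle's argument exploits.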
This is the Chinese Room thought experiment, the invention of the American philosopher John Searle (born 1932). It’s an imaginary situation designed to show that a computer can’t really think even if it seems to. In order to see what’s going on here you need to understand the Turing Test.
Alan Turing (1912–54) was an outstanding Cambridge mathematician who helped to invent the modern computer. His number-crunching machines built during the Second World War at Bletchley Park in England cracked the ‘Enigma’ codes used by German submarine commanders. The Allies could then intercept messages and know what the Nazis were planning.
Intrigued by the idea that one day computers might do more than crack codes, and could be genuinely intelligent, in 1950 he suggested a test that any such computer would have to pass. This has come to be known as the Turing Test for artificial intelligence but he originally called it the Imitation Game. It comes from his belief that what’s interesting about the brain isn’t that it has the consistency of cold porridge. Its function matters more than the way it wobbles when removed from the head, or the fact that it is grey. Computers may be hard and made from electronic components, but they can still do many things brains do.
When we judge whether a person is intelligent or not we do that based on the answers they give to questions rather than opening up their brains to look at how the neurons join up. So it’s only fair that when we judge computers we focus on external evidence rather than on how they are constructed. We should look at inputs and outputs, not the blood and nerves or the wiring and transistors inside. Here’s what Turing suggested. A tester is in one room, typing a conversation on to a screen. The tester doesn’t know whether he or she is having a conversation with another person in a different room via the screen – or with a computer generating its own answers. If during the conversation the tester can’t tell whether there is a computer or a human being responding, the computer passes the Turing Test. If a computer passes that test then it is reasonable to say that it is intelligent – not just in a metaphorical way, but in the way that a human being can be.
What Searle’s Chinese Room example – the scenario with the squiggles on bits of card – is meant to show is that even if a computer passed Turing’s test for artificial intelligence that wouldn’t prove that it genuinely understood anything. Remember you are in this room with strange symbols coming through the letterbox and are passing other symbols back out through the letterbox, and you are guided by a rulebook. This is a meaningless task for you, and you have no idea why you are doing it. But without your realizing it, you are answering questions in Chinese. You only speak English and know no Chinese at all. But the signs coming in are questions in Chinese, and the signs you give out are plausible answers to those questions. The Chinese Room with you in it wins the Imitation Game. You give answers that would fool someone outside into thinking that you really understand what you are talking about. So, this suggests, a computer that passes the Turing Test isn’t necessarily intelligent, since from within the room you don’t have any sense of what’s being discussed at all.
Searle thinks that computers are like someone in the Chinese Room: they don’t really have intelligence and can’t really think. All they do is shuffle symbols around following rules that their makers have programmed into them. The processes they use are built into the software. But that is very different from truly understanding something or having genuine intelligence. Another way of putting this is that the people who program the computer give it a syntax: that is, they provide rules about the correct order in which to process the symbols. But they don’t provide it with a semantics: they don’t give meanings to the symbols. Human beings mean things when they speak – their thoughts relate in various ways to the world. Computers that seem to mean things are only imitating human thought, a bit like parrots. Although a parrot can mimic speech, it never really understands what it is saying. Similarly, according to Searle, computers don’t really understand or think about anything: you can’t get semantics from syntax alone.
A criticism of Searle’s thought experiment is that it looks at the question of whether or not the person in the room understands what’s going on. But that’s a mistake. The person is just a part of the whole system. Even if the person doesn’t understand what’s going on, perhaps the whole system (including the room, the code book, the symbols and so on) understands. Searle’s response to this objection was to change the thought experiment. Instead of imagining a person in a room shuffling symbols around, imagine this person has memorized the whole rulebook and then is outside in the middle of a field handing back the appropriate symbol cards. The person still wouldn’t understand the individual questions even though he or she would give the right answers to the questions asked in Chinese. Understanding involves more than just giving the right answers.
Some philosophers, though, remain convinced that the human mind is just like a computer program. They believe that computers really can and do think. If they’re right, then perhaps one day it will be possible to transfer minds from people’s brains into computers. If your mind is a program, then just because it is running in the soggy mass of brain tissue in your head now doesn’t mean that it couldn’t run in a big shiny computer somewhere else in the future. If, with the help of super-intelligent computers, someone manages to map the billions of functional connections that make up your mind, then perhaps one day it will be possible to survive death. Your mind could be uploaded into a computer so that it could carry on working long after your body had been buried or cremated. Whether that would be a good way to exist is another question. If Searle is right, though, there would be no guarantee that the uploaded mind would be conscious in the way that you are now, even if it gave responses that seemed to show that it was.
Writing over sixty years ago, Turing was already convinced that computers could think. If he was right, it might not be that long before we find them thinking about philosophy. That’s more likely than that they will allow our minds to survive death. Perhaps one day computers will even have interesting things to say about the fundamental questions of how we should live and about the nature of reality – the sorts of questions that philosophers have grappled with for several thousand years. In the meantime, though, we need to rely on flesh and blood philosophers to clarify our thinking in these areas. One of the most influential and controversial of these is Peter Singer.