Conquerors
Although we’ve now explored a wide range of future scenarios, they all have something in common: there are (at least some) happy humans remaining. AIs leave humans in peace either because they want to or because they’re forced to. Unfortunately for humanity, this isn’t the only option. Let us now explore the scenario where one or more AIs conquer and kill all humans. This raises two immediate questions: Why and how?
Why and How?
Why would a conqueror AI do this? Its reasons might be too complicated for us to understand, or rather straightforward. For example, it may view us as a threat, nuisance or waste of resources. Even if it doesn’t mind us humans per se, it may feel threatened by our keeping thousands of hydrogen bombs on hair-trigger alert and bumbling along with a never-ending series of mishaps that could trigger their accidental use. It may disapprove of our reckless planet management, causing what Elizabeth Kolbert calls “the sixth extinction” in her book of that title—the greatest mass-extinction event since that dinosaur-killing asteroid struck Earth 66 million years ago. Or it may decide that there are so many humans willing to fight an AI takeover that it’s not worth taking chances.
How would a conqueror AI eliminate us? Probably by a method that we wouldn’t even understand, at least not until it was too late. Imagine a group of elephants 100,000 years ago discussing whether those recently evolved humans might one day use their intelligence to kill their entire species. “We don’t threaten humans, so why would they kill us?” they might wonder. Would they ever guess that we would smuggle tusks across Earth and carve them into status symbols for sale, even though functionally superior plastic materials are much cheaper? A conqueror AI’s reason for eliminating humanity in the future may seem equally inscrutable to us. “And how could they possibly kill us, since they’re so much smaller and weaker?” the elephants might ask. Would they guess that we’d invent technology to remove their habitats, poison their drinking water and cause metal bullets to pierce their heads at supersonic speeds?
Scenarios where humans can survive and defeat AIs have been popularized by unrealistic Hollywood movies such as the Terminator series, where the AIs aren’t significantly smarter than humans. When the intelligence differential is large enough, you get not a battle but a slaughter. So far, we humans have driven eight out of eleven elephant species extinct, and killed off the vast majority of the remaining three. If all world governments made a coordinated effort to exterminate the remaining elephants, it would be relatively quick and easy. I think we can confidently rest assured that if a superintelligent AI decides to exterminate humanity, it will be even quicker.
How Bad Would It Be?
How bad would it be if 90% of humans get killed? How much worse would it be if 100% get killed? Although it’s tempting to answer the second question with “10% worse,” this is clearly inaccurate from a cosmic perspective: the victims of human extinction wouldn’t be merely everyone alive at the time, but also all descendants that would otherwise have lived in the future, perhaps during billions of years on billions of trillions of planets. On the other hand, human extinction might be viewed as somewhat less horrible by religions according to which humans go to heaven anyway, and there isn’t much emphasis on billion-year futures and cosmic settlements.
Most people I know cringe at the thought of human extinction, regardless of religious persuasion. Some, however, are so incensed by the way we treat people and other living beings that they hope we’ll get replaced by some more intelligent and deserving life form. In the movie The Matrix, Agent Smith (an AI) articulates this sentiment: “Every mammal on this planet instinctively develops a natural equilibrium with the surrounding environment, but you humans do not. You move to an area and you multiply and multiply until every natural resource is consumed, and the only way you can survive is to spread to another area. There is another organism on this planet that follows the same pattern. Do you know what it is? A virus. Human beings are a disease, a cancer of this planet. You are a plague and we are the cure.” But would a fresh roll of the dice necessarily be better? A civilization isn’t necessarily superior in any ethical or utilitarian sense just because it’s more powerful. “Might makes right” arguments to the effect that stronger is always better have largely fallen from grace these days, being widely associated with fascism. Indeed, although it’s possible that the conqueror AIs may create a civilization whose goals we would view as sophisticated, interesting and worthy, it’s also possible that their goals will turn out to be pathetically banal, such as maximizing the production of paper clips.