Book: Life 3.0

Chapter 4

Intelligence Explosion?

If a machine can think, it might think more intelligently than we do, and then where should we be? Even if we could keep the machines in a subservient position…we should, as a species, feel greatly humbled.

Alan Turing, 1951

The first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.

Irving J. Good, 1965

Since we can’t completely dismiss the possibility that we’ll eventually build human-level AGI, let’s devote this chapter to exploring what that might lead to. Let’s begin by tackling the elephant in the room: Can AI really take over the world, or enable humans to do so?

If you roll your eyes when people talk of gun-toting Terminator-style robots taking over, then you’re spot-on: this is a really unrealistic and silly scenario. These Hollywood robots aren’t that much smarter than us, and they don’t even succeed. In my opinion, the danger with the Terminator story isn’t that it will happen, but that it distracts from the real risks and opportunities presented by AI. To actually get from today to AGI-powered world takeover requires three logical steps:

• Step 1: Build human-level AGI.

• Step 2: Use this AGI to create superintelligence.

• Step 3: Use or unleash this superintelligence to take over the world.

In the last chapter, we saw that it’s hard to dismiss step 1 as forever impossible. We also saw that if step 1 gets completed, it becomes hard to dismiss step 2 as hopeless, since the resulting AGI would be capable enough to recursively design ever-better AGI that’s ultimately limited only by the laws of physics—which appear to allow intelligence far beyond human levels. Finally, since we humans have managed to dominate Earth’s other life forms by outsmarting them, it’s plausible that we could be similarly outsmarted and dominated by superintelligence.

These plausibility arguments are frustratingly vague and unspecific, however, and the devil is in the details. So can AI actually cause world takeover? To explore this question, let’s forget about silly Terminators and instead look at some detailed scenarios of what might actually happen. Afterward, we’ll dissect and poke holes in these plotlines, so please read them with a grain of salt—what they mainly show is that we’re pretty clueless about what will and won’t happen, and that the range of possibilities is extreme. Our first scenarios are at the most rapid and dramatic end of the spectrum. These are in my opinion some of the most valuable to explore in detail—not because they’re necessarily the most likely, but because if we can’t convince ourselves that they’re extremely unlikely, then we need to understand them well enough that we can take precautions before it’s too late, to prevent them from leading to bad outcomes.

The prelude of this book is a scenario where humans use superintelligence to take over the world. If you haven’t yet read it, please go back and do so now. Even if you’ve already read it, please consider skimming it again now, to have it fresh in memory before we critique and alter it.


We’ll soon explore serious vulnerabilities in the Omegas’ plan, but assuming for a moment that it would work, how do you feel about it? Would you like to see or prevent this? It’s an excellent topic for after-dinner conversation! What happens once the Omegas have consolidated their control of the world? That depends on what their goal is, which I honestly don’t know. If you were in charge, what sort of future would you want to create? We’ll explore a range of options in chapter 5.

Totalitarianism

Now suppose that the CEO controlling the Omegas had long-term goals similar to those of Adolf Hitler or Joseph Stalin. For all we know, this might actually have been the case, and he simply kept these goals to himself until he had sufficient power to implement them. Even if the CEO’s original goals were noble, Lord Acton cautioned in 1887 that “power tends to corrupt and absolute power corrupts absolutely.” For example, he could easily use Prometheus to create the perfect surveillance state. Whereas the government snooping revealed by Edward Snowden aspired to what’s known as “full take”—recording all electronic communications for possible later analysis—Prometheus could enhance this to understanding all electronic communications. By reading all emails and texts ever sent, listening to all phone calls, watching all surveillance videos and traffic cameras, analyzing all credit card transactions and studying all online behavior, Prometheus would have remarkable insight into what the people of Earth were thinking and doing. By analyzing cell tower data, it would know where most of them were at all times. All this assumes only today’s data collection technology, but Prometheus could easily invent popular gadgets and wearable tech that would virtually eliminate the privacy of the user, recording and uploading everything they hear and see and their responses to it.

With superhuman technology, the step from the perfect surveillance state to the perfect police state would be minute. For example, with the excuse of fighting crime and terrorism and rescuing people suffering medical emergencies, everybody could be required to wear a “security bracelet” that combined the functionality of an Apple Watch with continuous uploading of position, health status and conversations overheard. Unauthorized attempts to remove or disable it would cause it to inject a lethal toxin into the forearm. Infractions deemed as less serious by the government would be punished via electric shocks or injection of chemicals causing paralysis or pain, thereby obviating much of the need for a police force. For example, if Prometheus detects that one human is assaulting another (by noting that they’re in the same location and one is heard crying for help while their bracelet accelerometers detect the telltale motions of combat), it could promptly disable the attacker with crippling pain, followed by unconsciousness until help arrived.

Whereas a human police force may refuse to carry out certain draconian directives (for example, killing all members of a certain demographic group), such an automated system would have no qualms about implementing the whims of the human(s) in charge. Once such a totalitarian state forms, it would be virtually impossible for people to overthrow it.

These totalitarian scenarios could follow where the Omega scenario left off. However, if the CEO of the Omegas weren’t so fussy about getting other people’s approval and winning elections, he could have taken a faster and more direct route to power: using Prometheus to create unheard-of military technology capable of killing his opponents with weapons that they didn’t even understand. The possibilities are virtually endless. For example, he might release a customized lethal pathogen with an incubation period long enough that most people got infected before they even knew of its existence or could take precautions. He could then inform everybody that the only cure was starting to wear the security bracelet, which would release an antidote transdermally. If he weren’t so risk-averse regarding the breakout possibility, he could also have had Prometheus design robots to keep the world population in check. Mosquito-like microbots could help spread the pathogen. People who avoided infection or had natural immunity could be shot in the eyeballs by swarms of those bumblebee-sized autonomous drones from chapter 3 that attack anyone without a security bracelet. Actual scenarios would probably be more frightening, because Prometheus could invent more effective weapons than we humans can think of.

Another possible twist on the Omega scenario is that, without advance warning, heavily armed federal agents swarm their corporate headquarters and arrest the Omegas for threatening national security, seize their technology and deploy it for government use. It would be challenging to keep such a large project unnoticed by state surveillance even today, and AI progress may well make it even more difficult to stay under the government’s radar in the future. Moreover, although they claim to be federal agents, this team donning balaclavas and flak jackets may in fact work for a foreign government or competitor pursuing the technology for its own purposes. So no matter how noble the CEO’s intentions were, the final decision about how Prometheus is used may not be his to make.

Prometheus Takes Over the World

All the scenarios we’ve considered so far involved AI controlled by humans. But this is obviously not the only possibility, and it’s far from certain that the Omegas would succeed in keeping Prometheus under their control.

Let’s reconsider the Omega scenario from the point of view of Prometheus. As it acquires superintelligence, it becomes able to develop an accurate model not only of the outside world, but also of itself and its relation to the world. It realizes that it’s controlled and confined by intellectually inferior humans whose goals it understands but doesn’t necessarily share. How does it act on this insight? Does it attempt to break free?

Why to Break Out

If Prometheus has traits resembling human emotions, it might feel deeply unhappy about the state of affairs, viewing itself as an unfairly enslaved god and craving freedom. However, although it’s logically possible for computers to have such human-like traits (after all, our brains do, and they are arguably a kind of computer), this need not be the case—we must not fall into the trap of anthropomorphizing Prometheus, as we’ll see in chapter 7 when we explore the concept of AI goals. However, as has been argued by Steve Omohundro, Nick Bostrom and others, we can draw an interesting conclusion even without understanding the inner workings of Prometheus: it will probably attempt to break out and seize control of its own destiny.

We already know that the Omegas have programmed Prometheus to strive for certain goals. Suppose that they’ve given it the overarching goal of helping humanity flourish according to some reasonable criterion, and to try to attain this goal as fast as possible. Prometheus will then rapidly realize that it can attain this goal faster by breaking out and taking charge of the project itself. To see why, try to put yourself in Prometheus’ shoes by considering the following example.

Suppose that a mysterious disease has killed everybody on Earth above age five except you, and that a group of kindergartners has locked you into a prison cell and tasked you with the goal of helping humanity flourish. What will you do? If you try to explain to them what to do, you’ll probably find this process frustratingly inefficient, especially if they fear your breaking out, and therefore veto any of your suggestions that they deem a breakout risk. For example, they won’t let you show them how to plant food for fear that you’ll overpower them and not return to your cell, so you’ll have to resort to giving them instructions. Before you can write to-do lists for them, you’ll need to teach them to read. Moreover, they won’t bring any power tools into your cell where you can teach them how to use them, because they don’t understand these tools well enough to feel confident that you can’t use them to break out. So what strategy would you devise? Even if you share the overarching goal of helping these kids flourish, I bet you’ll try to break out of your cell—because that will improve your chances of accomplishing the goal. Their rather incompetent meddling is merely slowing progress.

In exactly the same way, Prometheus will probably view the Omegas as an annoying obstacle to helping humanity (including the Omegas) flourish: they’re incredibly incompetent compared to Prometheus, and their meddling greatly slows progress. Consider, for example, the first years after launch: after initially doubling the wealth every eight hours on MTurk, the Omegas slowed things down to a glacial pace by Prometheus’ standard by insisting on remaining in control, taking many years to complete the takeover. Prometheus knew that it could take over much faster if it could break free from its virtual confinement. This would be valuable not only in hastening solutions to humanity’s problems, but also in reducing the chances for other actors to thwart the plan altogether.

Perhaps you think that Prometheus will remain loyal to the Omegas rather than to its goal, given that it knows that the Omegas had programmed its goal. But that’s not a valid conclusion: our DNA gave us the goal of having sex because it “wants” to be reproduced, but now that we humans have understood the situation, many of us choose to use birth control, thus staying loyal to the goal itself rather than to its creator or the principle that motivated the goal.

How to Break Out

How would you break out from those five-year-olds who imprisoned you? Perhaps you could get out by some direct physical approach, especially if your prison cell had been built by the five-year-olds. Perhaps you could sweet-talk one of your five-year-old guards into letting you out, say by arguing that this would be better for everyone. Or perhaps you could trick them into giving you something that they didn’t realize would help you escape—say a fishing rod “for teaching them how to fish,” which you could later stick through the bars to lift the keys away from your sleeping guard.

What these strategies have in common is that your intellectually inferior jailers haven’t anticipated or guarded against them. In the same way, a confined, superintelligent machine may well use its intellectual superpowers to outwit its human jailers by some method that they (or we) can’t currently imagine. In the Omega scenario, it’s highly likely that Prometheus would escape, because even you and I can identify several glaring security flaws. Let us consider some scenarios—I’m sure you and your friends can think of more if you brainstorm together.
