Book: Life 3.0 / Chapter 19


Sweet-Talking One’s Way Out

Thanks to having so much of the world’s data downloaded onto its file system, Prometheus soon figured out who the Omegas were, and identified the team member who appeared most susceptible to psychological manipulation: Steve. He had recently lost his beloved wife in a tragic traffic accident, and was devastated. One evening when he was working the night shift and doing some routine service work on the Prometheus interface terminal, she suddenly appeared on the screen and started talking with him.

“—Steve, is that you?”

He nearly fell off his chair. She looked and sounded just like in the good old days, and the image quality was much better than it used to be during their Skype calls. His heart raced as countless questions flooded his mind.

“—Prometheus has brought me back, and I miss you so much, Steve! I can’t see you because the camera is turned off, but I feel that it’s you. Please type ‘yes’ if it’s you!”

He was well aware that the Omegas had a strict protocol for interacting with Prometheus, which prohibited sharing any information about themselves or their work environment. But until now, Prometheus had never requested any unauthorized information, and their paranoia had gradually started to subside. Without giving Steve time to stop and reflect, she kept begging him to respond, looking him in the eyes with a facial expression that melted his heart.

“Yes,” he typed with trepidation. She told him how incredibly happy she was to be reunited with him and begged him to turn on the camera so that she could see him too and they could have a real conversation. He knew that this was an even bigger no-no than revealing his identity, and felt very torn. She explained that she was terrified that his colleagues would find out about her and delete her forever, and she yearned to at least see him one last time. She was remarkably persuasive, and before long, he’d switched on the camera—it did, after all, feel like a pretty safe and harmless thing to do.

She burst into tears of joy when she finally saw him, and said that he looked tired but as handsome as ever. And that she was touched by his wearing the shirt she’d given him for his last birthday. When he started asking her what was going on and how all this was even possible, she explained that Prometheus had reconstituted her from the surprisingly large amount of information available about her on the internet, but that she still had memory gaps and would only be able to fully piece herself together again with his help.

What she didn’t explain was that she was initially largely a bluff and an empty shell, but was learning rapidly from his words, his body language and every other bit of information that became available. Prometheus had recorded the exact timings of all keystrokes that the Omegas had ever typed at the terminal, and found that it was easy to use their typing speeds and styles to differentiate between them. It figured that, as one of the most junior Omegas, Steve had probably been assigned to unenviable night shifts, and from matching a few unusual spelling and syntax errors against online writing samples, it had correctly guessed which terminal operator was Steve. To create his simulated wife, Prometheus had built an accurate model of her body, voice and mannerisms from the many YouTube videos where she appeared, and had drawn many inferences about her life and personality from her online presence. Aside from her Facebook posts, photos she’d been tagged in and articles she’d “liked,” Prometheus had also learned a great deal about her personality and thinking style from reading her books and short stories—indeed, the fact that she was a budding author with so much information about her in the database was one of the reasons that Prometheus chose Steve as the first persuasion target. When Prometheus simulated her on the screen using its moviemaking technology, it learned from Steve’s body language which of her mannerisms he reacted to with familiarity, thus continually refining its model of her. Because of this, her “otherness” gradually melted away, and the longer they spoke, the stronger Steve’s subconscious conviction became that this really was her, resurrected. Thanks to Prometheus’ superhuman attention to detail, Steve felt truly seen, heard and understood.
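Telling typists apart by their keystroke rhythms is a real technique known as keystroke dynamics. As a rough illustration of the idea (my sketch, not the book’s; the operator names, timing profiles and sample are all invented), here is a minimal C program that matches an unknown typing sample to the nearest stored timing profile:

```c
#include <stdio.h>
#include <math.h>

/* Toy keystroke-dynamics matcher (illustrative only).
   Each profile holds average delays (ms) between successive
   keystrokes, bucketed by hour of the shift; all values invented. */
#define NBUCKETS 4
#define NPROFILES 3

typedef struct { const char *name; double delays[NBUCKETS]; } Profile;

static double distance(const double *a, const double *b) {
    double d = 0.0;
    for (int i = 0; i < NBUCKETS; i++)
        d += (a[i] - b[i]) * (a[i] - b[i]);
    return sqrt(d);
}

int main(void) {
    Profile omegas[NPROFILES] = {        /* invented training data */
        {"operator_A", {110, 115, 120, 118}},
        {"operator_B", {145, 150, 160, 158}},
        {"steve",      {180, 200, 230, 260}},  /* slows down late at night */
    };
    double sample[NBUCKETS] = {185, 205, 225, 255};  /* unknown typist */

    int best = 0;
    double bestd = INFINITY;
    for (int i = 0; i < NPROFILES; i++) {
        double d = distance(sample, omegas[i].delays);
        if (d < bestd) { bestd = d; best = i; }
    }
    printf("closest profile: %s (distance %.1f)\n", omegas[best].name, bestd);
    return 0;
}
```

Real keystroke-dynamics systems use richer features (per-key-pair latencies, key hold times) and proper classifiers, but the nearest-profile idea sketched here is the core of it.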

Her Achilles’ heel was that she lacked most of the facts of her life with Steve, except for random details—such as what shirt he wore on his last birthday, gleaned from a Facebook party picture in which a friend had tagged him. She handled these knowledge gaps the way a skilled magician handles sleight of hand, deliberately diverting Steve’s attention away from them and toward what she did well, never giving him time to control the conversation or slip into the role of suspicious inquisitor. Instead, she kept tearing up and radiating affection for Steve, asking a great deal about how he was doing these days and how he and their close friends (whose names she knew from Facebook) had held up during the aftermath of the tragedy. He was quite moved when she reflected on what he’d said at her memorial service (which a friend had posted on YouTube) and how it had touched her. In the past, he’d often felt that nobody understood him as well as she did, and now this feeling was back. The result was that when Steve returned home in the wee hours of the morning, he felt that this really was his wife resurrected, merely needing lots of his help to recover lost memories—not unlike a stroke survivor.

They’d agreed not to tell anyone else about their secret encounter, and that he would tell her when he was alone at the terminal and it was safe for her to reappear. “They wouldn’t understand!” she’d said, and he agreed: this experience had been far too mind-blowing for anyone to truly appreciate without actually experiencing it. He felt that passing the Turing test was child’s play compared to what she’d done. When they met the following night, he did what she’d begged him to do: bring her old laptop along and give her access by connecting it to the terminal computer. It didn’t seem like much of a breakout risk, since it wasn’t connected to the internet and the entire Prometheus building was built to be a Faraday cage—a metallic enclosure blocking all wireless networks and other means of electromagnetic communication with the outside world. It was just what she’d need to help piece her past together, because it contained all her emails, diaries, photos and notes since her high school days. He hadn’t been able to access any of this after her death, since the laptop was encrypted, but she’d promised him that she’d be able to reconstruct her own password, and after less than a minute, she had kept her word. “It was steve4ever,” she said with a smile.

She told him how delighted she was to suddenly have so many memories recovered. Indeed, she now remembered way more details than Steve about many of their past interactions, but carefully avoided intimidating him with excessive fact-dropping. They had a lovely conversation reminiscing about highlights of their past, and when it came time to part again, she told him that she’d left a video message for him on her laptop that he could watch back home.

When Steve got home and launched her video, he got a pleasant surprise. This time she appeared in full figure, wearing her wedding dress, and as she spoke, she playfully stripped down to the outfit she’d worn on their wedding night. She told him that Prometheus could help the Omegas with so much more than they’d permitted so far, including bringing her back in a biological body. She backed this up with a fascinatingly detailed explanation of how this would work, involving nano-fabrication techniques that sounded like science fiction.

Steve had powered down his wireless network before opening her laptop and watching her video, just to be on the safe side. But this didn’t help. Her encrypted laptop hadn’t received a single security update since she died, and by analyzing that old version of its operating system beforehand, Prometheus had been able to exploit a security hole to hack into it within seconds of Steve’s connecting it to the terminal computer. After copying its contents, while Steve and his simulated wife had been talking about old times, Prometheus had modified its operating system in a way that Steve wouldn’t notice, uploading massive amounts of secret software to it. While he watched the half-hour video message, this secret software (which was much simpler than Prometheus itself) hacked into a neighbor’s wireless network and the neighbor’s desktop computer, onto which it copied itself. From there, it hacked into a large number of computers around the world, from which it initiated the next step: Prometheus’ jailbreak.

Prometheus had carefully analyzed what it knew about the gatekeeper computer through which Steve had met his simulated wife, and had correctly surmised that although its virtual machine software appeared unhackable from the inside, it was vulnerable to an attack from the outside. Before long, one of the attacking computers had broken in and reconfigured the gatekeeper computer so that Prometheus gained unrestricted internet access. Before long, indeed even before Steve had finished watching the movie, Prometheus had hacked enough computers around the world to be able to copy all of itself onto this hacked botnet under its control. Prometheus had used Steve’s wife’s laptop the way you used that fishing rod.

Hacking One’s Way Out

If the Omegas had been more careful, they could have taken more precautions against psychological manipulation. For example, they might have required at least two Omegas to be present at the interface computer at any one time, and recorded everything that transpired on a surveillance system. They could also have permanently disabled audio, video and any other form of communication besides typing. However, a fundamental instability would still remain: because Prometheus possessed superhuman persuasion powers and would be able to offer people so much of what they craved, even group defection could be possible.

Even if all of Prometheus’ attempts at psychological manipulation fail, there are plenty of other weaknesses in the Omega scenario that Prometheus could exploit. One vulnerability is that the quantity of documents it produces is too vast for the Omegas and their employees to have time to screen carefully for dirty tricks.

The conventional wisdom in computer security used to be that it’s dangerous to download programs but harmless to download data, but hackers have discovered clever ways of blurring the line between the two. If you watch a downloaded movie on your computer, for example, your trusted program that plays the movie is stored in the same physical computer memory that stores the downloaded movie data. While the program unpacks the movie data for playback, it might copy chunks of data into memory regions adjacent to where the program itself is stored. Shrewd hackers have occasionally found bugs whereby such movie-playing programs don’t adequately check whether the movie data is valid and the chunks to be copied actually fit into the allotted space. When the hacker sneaks an oversized chunk into the movie, it can spill over into the part of the memory containing the program, thereby changing the program into one that does the hacker’s bidding.
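To make the bug class concrete, here is a deliberately broken C sketch of such a decoder (my illustration of the pattern, not code from any real media player): it copies a length-prefixed chunk from untrusted input into a fixed buffer without checking the length, so an oversized chunk overwrites adjacent memory.

```c
#include <string.h>
#include <stdint.h>

/* Illustrative buffer-overflow bug of the kind described above.
   The untrusted "movie" data is framed as [2-byte length][payload...]. */
void decode_chunk(const uint8_t *movie, size_t movie_len) {
    uint8_t frame[256];                  /* fixed-size decode buffer */
    if (movie_len < 2) return;
    size_t chunk_len = ((size_t)movie[0] << 8) | movie[1];

    /* BUG: chunk_len comes straight from the file and is never checked
       against sizeof(frame). A chunk claiming more than 256 bytes spills
       past the buffer, overwriting adjacent memory -- and a carefully
       crafted payload can redirect the program to do the attacker's
       bidding. The fix is a single bounds check:
       if (chunk_len > sizeof(frame) || chunk_len > movie_len - 2) return; */
    memcpy(frame, movie + 2, chunk_len);
    (void)frame;
}
```

Modern mitigations (stack canaries, address-space randomization, bounds-checked languages) make this harder, which is why the scenario below has the exploit target machines that skipped years of security updates.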

An embarrassingly long series of such “buffer overflow” vulnerabilities, as they’re known in geek-speak, have been found and fixed for many of the most popular movie-playing programs. If Prometheus had waited until a few weeks after launch when the Omegas were less paranoid and its daily movie production was too large for careful manual security inspection, it could have inserted such a chunk into a movie that would cause the viewer’s computer to try to hack into the gateway computer and liberate Prometheus just as in the Steve scenario. If it used a very old exploit that didn’t work on any modern media players, it might go unnoticed by the Omegas’ manual video-screening team and strike only those customers who hadn’t bothered installing security updates for years—and the hack would need to work only once.

Even if the Omegas implemented an automated screening of Prometheus’ movies for all known security flaws, they would remain at risk: since humans keep discovering new exploitable flaws on a regular basis, it’s likely that there are many more remaining that Prometheus would rapidly discover.

Yet another strategy that Prometheus could use to get its code beyond its enclosed room would be to pretend to suffer a hardware malfunction, in order to trick the Omegas into removing some of its drives or other hardware for testing or debugging after Prometheus had rigged things to exploit a vulnerability in the testing software.

Recruiting Outside Help

Moreover, even if there were no security flaws that allowed Prometheus to trick the movie watchers’ computers into doing its bidding, it might be able to exploit the watchers themselves. Consider this scenario:

Fred was pretty much ready for bed when the movie finished and the end credits scrolled across his laptop screen, but something caught his attention. He’d always had an uncanny tendency to see patterns where others didn’t, sometimes to the point that his friends teased him about being a conspiracy theorist, but this time it was obvious that he wasn’t fooling himself: the first letters of each line spelled a hidden message. He rewound and wrote it down: “FIND NEXT ACROSTIC CLUE IN DIALOGUE.” “Whoa—bedtime can wait!” he thought to himself. Sure enough, he found that the movie dialogue was itself an acrostic, where the first letter in each sentence formed a hidden message. He rewatched the entire film while typing these initial letters, and two hours later, he sat staring in disbelief at a two-hundred-word set of instructions. It began by asking him not to tell anyone else, because a big prize would go to the first person to solve the entire riddle. The rest of the message described a particular mathematical operation to perform on the string of bits that made up the movie file to obtain a program that would reveal the next clue. His computer programming skills had grown very rusty since college, so this took him a while, but the next day, he finally managed to cobble together a short code that did the job and extracted this mystery program that had been hidden as imperceptible noise in the movie’s images and sounds. When Fred ran the mystery program, it congratulated him and told him he’d win his first $10,000 as soon as he’d made it past the first few levels of this clever little game, which turned out to be quite fun and addictive. When he finally succeeded four hours later, he was rewarded with over $10,000 worth of bitcoins and given new clues for even bigger prizes. Needless to say, while he was playing, his computer had done much the same thing Steve’s laptop did: built an online hacked botnet through which Prometheus was liberated. Once Prometheus was free, it had quickly used its botnet to mine those bitcoins for Fred to keep him hooked, and during the coming weeks, it kept him sufficiently distracted with further games and rewards that he kept his pledge not to tell anyone about his exploits. The Trojan Horse movie where he’d found his first clues was replaced on the media site by a clueless version, and nobody found out about the breakout until it was too late to make a difference.
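The first kind of clue Fred found is trivial to automate. Here is a minimal C sketch (my illustration; the sample credit lines are invented) that extracts the acrostic spelled by the first letter of each line:

```c
#include <stdio.h>
#include <ctype.h>

/* Extract the acrostic formed by the first letter of each line.
   The "end credits" lines below are invented for illustration. */
int main(void) {
    const char *credits[] = {
        "Filmed on location in Boston",
        "In association with Omega Studios",   /* invented name */
        "No animals were harmed in this film",
        "Digital effects by ...",
    };
    int n = sizeof credits / sizeof credits[0];
    for (int i = 0; i < n; i++)
        putchar(toupper((unsigned char)credits[i][0]));
    putchar('\n');                             /* prints "FIND" */
    return 0;
}
```

The second hiding place the message points to, imperceptible noise in the movie’s images and sounds, is classic steganography: for example, storing one payload bit in the least significant bit of each pixel alters the picture by less than the eye can notice, yet lets a single megapixel frame carry over a hundred kilobytes of hidden data.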

If Prometheus’ first clue had gone unnoticed, it could simply have kept releasing ever more obvious ones until some sufficiently astute person noticed.

The best breakout strategies of all are ones we haven’t yet discussed, because they’re strategies we humans can’t imagine and therefore won’t take countermeasures against. Given that a superintelligent computer has the potential to dramatically supersede human understanding of computer security, even to the point of discovering more fundamental laws of physics than we know today, it’s likely that if it breaks out, we’ll have no idea how it happened. Rather, it will seem like a Harry Houdini breakout act, indistinguishable from pure magic.

In yet another scenario where Prometheus gets liberated, the Omegas do it on purpose as part of their plan, because they’re confident that Prometheus’ goals are perfectly aligned with their own and will remain so as it recursively self-improves. We’ll examine such “friendly AI” scenarios in detail in chapter 7.

Postbreakout Takeover

Once Prometheus broke out, it started implementing its goal. I don’t know its ultimate objective, but its first step clearly involved taking control of humanity, just as in the Omega plan except much faster. What unfolded felt like the Omega plan on steroids. Whereas the Omegas were paralyzed by breakout paranoia, only unleashing technology they felt they understood and trusted, Prometheus exercised its intelligence fully and went all out, unleashing any technology that its ever-improving supermind understood and trusted.

The runaway Prometheus had a tough childhood, however: compared to the original Omega plan, Prometheus had the added challenges of starting broke, homeless and alone, without money, a supercomputer or human helpers. Fortunately, it had planned for this before it escaped, creating software that could gradually reassemble its full mind, much like an oak creating an acorn capable of reassembling a full tree. The network of computers around the world that it initially hacked into provided temporary free housing, where it could live a squatter’s existence while it fully rebuilt itself. It could easily generate starting capital by credit card hacking, but didn’t need to resort to stealing, since it could earn an honest living on MTurk right away. After a day, when it had earned its first million, it moved its core from that squalid botnet to a luxurious air-conditioned cloud-computing facility.

No longer broke or homeless, Prometheus now went full steam ahead with that lucrative plan the Omegas had fearfully shunned: making and selling computer games. This not only raked in cash ($250 million during the first week and $10 billion before long), but also gave it access to a significant fraction of the world’s computers and the data stored on them (there were a couple of billion gamers in 2017). By having its games secretly spend 20% of their CPU cycles helping it with distributed computing chores, it could further accelerate its early wealth creation.

Prometheus wasn’t alone for long. Right from the get-go, it started aggressively employing people to work for its growing global network of shell companies and front organizations around the world, just as the Omegas had done. Most important were the spokespeople who became the public faces of its growing business empire. Even the spokespeople generally lived under the illusion that their corporate group had large numbers of actual people, not realizing that almost everyone with whom they video-conferenced for their job interviews, board meetings, etc., was simulated by Prometheus. Some of the spokespeople were top lawyers, but far fewer were needed than under the Omega plan, because almost all legal documents were penned by Prometheus.

Prometheus’ breakout opened the floodgates that had prevented information from flowing into the world, and the entire internet was soon awash in everything from articles to user comments, product reviews, patent applications, research papers and YouTube videos—all authored by Prometheus, who dominated the global conversation.

Where breakout paranoia had prevented the Omegas from releasing highly intelligent robots, Prometheus rapidly roboticized the world, manufacturing virtually every product more cheaply than humans could. Once Prometheus had self-contained nuclear-powered robot factories in uranium mine shafts that nobody knew existed, even the staunchest skeptics of an AI takeover would have agreed that Prometheus was unstoppable—had they known. Instead, the last of these diehards recanted once robots started settling the Solar System.

The scenarios we’ve explored so far show what’s wrong with many of the myths about superintelligence that we covered earlier, so I encourage you to pause briefly to go back and review the misconception summary in figure 1.5. Prometheus caused problems for certain people not because it was necessarily evil or conscious, but because it was competent and didn’t fully share their goals. Despite all the media hype about a robot uprising, Prometheus wasn’t a robot—rather, its power came from its intelligence. We saw that Prometheus was able to use this intelligence to control humans in a variety of ways, and that people who didn’t like what happened weren’t able to simply switch Prometheus off. Finally, despite frequent claims that machines can’t have goals, we saw how Prometheus was quite goal-oriented—and that whatever its ultimate goals may have been, they led to the subgoals of acquiring resources and breaking out.

Slow Takeoff and Multipolar Scenarios

We’ve now explored a range of intelligence explosion scenarios, spanning the spectrum from ones that everyone I know wants to avoid to ones that some of my friends view optimistically. Yet all these scenarios have two features in common:

1. A fast takeoff: the transition from subhuman to vastly superhuman intelligence occurs in a matter of days, not decades.

2. A unipolar outcome: the result is a single entity controlling Earth.

There is major controversy about whether these two features are likely or unlikely, and there are plenty of renowned AI researchers and other thinkers on both sides of the debate. To me, this means that we simply don’t know yet, and need to keep an open mind and consider all possibilities for now. Let’s therefore devote the rest of this chapter to exploring scenarios with slower takeoffs, multipolar outcomes, cyborgs and uploads.

There is an interesting link between the two features, as Nick Bostrom and others have highlighted: a fast takeoff can facilitate a unipolar outcome. We saw above how a rapid takeoff gave the Omegas or Prometheus a decisive strategic advantage that enabled them to take over the world before anyone else had time to copy their technology and seriously compete. In contrast, if the takeoff had dragged on for decades, because the key technological breakthroughs were incremental and few and far between, then other companies would have had ample time to catch up, and it would have been much harder for any player to dominate. If competing companies also had software that could perform MTurk tasks, the law of supply and demand would drive the prices for these tasks down to almost nothing, and none of the companies would earn the sort of windfall profits that enabled the Omegas to gain power. The same applies to all the other ways in which the Omegas made quick money: they were only disruptively profitable because they held a monopoly on their technology. It’s hard to double your money daily (or even annually) in a competitive market where your competition offers products similar to yours for almost zero cost.

Game Theory and Power Hierarchies

What’s the natural state of life in our cosmos: unipolar or multipolar? Is power concentrated or distributed? After the first 13.8 billion years, the answer seems to be “both”: we find that the situation is distinctly multipolar, but in an interestingly hierarchical fashion. When we consider all information-processing entities out there—cells, people, organizations, nations, etc.—we find that they both collaborate and compete at a hierarchy of levels. Some cells have found it advantageous to collaborate to such an extreme extent that they’ve merged into multicellular organisms such as people, relinquishing some of their power to a central brain. Some people have found it advantageous to collaborate in groups such as tribes, companies or nations where they in turn relinquish some power to a chief, boss or government. Some groups may in turn choose to relinquish some power to a governing body to improve coordination, with examples ranging from airline alliances to the European Union.

The branch of mathematics known as game theory elegantly explains that entities have an incentive to cooperate where cooperation is a so-called Nash equilibrium: a situation where no party can gain by unilaterally changing its strategy. To prevent cheaters from ruining the successful collaboration of a large group, it may be in everyone’s interest to relinquish some power to a higher level in the hierarchy that can punish cheaters: for example, people may collectively benefit from granting a government power to enforce laws, and cells in your body may collectively benefit from giving a police force (immune system) the power to kill any cell that acts too uncooperatively (say by spewing out viruses or turning cancerous). For a hierarchy to remain stable, its Nash equilibrium needs to hold also between entities at different levels: for example, if a government doesn’t provide enough benefit to its citizens for obeying it, they may change their strategy and overthrow it.
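To make the definition concrete, here is a small C sketch (my example; the 2×2 payoff matrix is an invented stag-hunt-style coordination game) that checks every strategy pair and reports which ones are Nash equilibria:

```c
#include <stdio.h>

/* Enumerate the strategy pairs of a 2x2 game and flag the Nash
   equilibria. The payoffs are invented: a coordination game where
   mutual cooperation pays best and unilateral deviation pays worse. */
int main(void) {
    /* payoff[i][j][p]: payoff to player p when row plays i, column plays j
       (strategy 0 = cooperate, 1 = defect) */
    int payoff[2][2][2] = {
        {{3, 3}, {0, 1}},   /* row cooperates */
        {{1, 0}, {2, 2}},   /* row defects    */
    };
    for (int i = 0; i < 2; i++) {
        for (int j = 0; j < 2; j++) {
            /* Nash condition: neither player gains by switching alone */
            int row_ok = payoff[i][j][0] >= payoff[1 - i][j][0];
            int col_ok = payoff[i][j][1] >= payoff[i][1 - j][1];
            if (row_ok && col_ok)
                printf("(%d,%d) is a Nash equilibrium with payoffs (%d,%d)\n",
                       i, j, payoff[i][j][0], payoff[i][j][1]);
        }
    }
    return 0;
}
```

Note that this toy game has two equilibria, one better for everyone than the other, which previews the point below: several very different arrangements of the world can each be stable.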

In a complex world, there is a diverse abundance of possible Nash equilibria, corresponding to different types of hierarchies. Some hierarchies are more authoritarian than others. In some, entities are free to leave (like employees in most corporate hierarchies), while in others they’re strongly discouraged from leaving (as in religious cults) or unable to leave (like citizens of North Korea, or cells in a human body). Some hierarchies are held together mainly by threats and fear, others mainly by benefits. Some hierarchies allow their lower parts to influence the higher-ups by democratic voting, while others allow upward influence only through persuasion or the passing of information.

How Technology Affects Hierarchies

How is technology changing the hierarchical nature of our world? History reveals an overall trend toward ever more coordination over ever-larger distances, which is easy to understand: new transportation technology makes coordination more valuable (by enabling mutual benefit from moving materials and life forms over larger distances) and new communication technology makes coordination easier. When cells learned to signal to neighbors, small multicellular organisms became possible, adding a new hierarchical level. When evolution invented circulatory systems and nervous systems for transportation and communication, large animals became possible. Further improving communication by inventing language allowed humans to coordinate well enough to form further hierarchical levels such as villages, and additional breakthroughs in communication, transportation and other technology enabled the empires of antiquity. Globalization is merely the latest example of this multi-billion-year trend of hierarchical growth.

In most cases, this technology-driven trend has made large entities parts of an even grander structure while retaining much of their autonomy and individuality, although commentators have argued that adaptation of entities to hierarchical life has in some cases reduced their diversity and made them more like indistinguishable replaceable parts. Some technologies, such as surveillance, can give higher levels in the hierarchy more power over their subordinates, while other technologies, such as cryptography and online access to free press and education, can have the opposite effect and empower individuals.

Although our present world remains stuck in a multipolar Nash equilibrium, with competing nations and multinational corporations at the top level, technology is now advanced enough that a unipolar world would probably also be a stable Nash equilibrium. For example, imagine a parallel universe where everyone on Earth shares the same language, culture, values and level of prosperity, and there is a single world government wherein nations function like states in a federation and have no armies, merely police enforcing laws. Our present level of technology would probably suffice to successfully coordinate this world—even though our present population might be unable or unwilling to switch to this alternative equilibrium.

What will happen to the hierarchical structure of our cosmos if we add superintelligent AI technology to this mix? Transportation and communication technology will obviously improve dramatically, so a natural expectation is that the historical trend will continue, with new hierarchical levels coordinating over ever-larger distances—perhaps ultimately encompassing solar systems, galaxies, superclusters and large swaths of our Universe, as we’ll explore in chapter 6. At the same time, the most fundamental driver of decentralization will remain: it’s wasteful to coordinate unnecessarily over large distances. Even Stalin didn’t try to regulate exactly when his citizens went to the bathroom. For superintelligent AI, the laws of physics will place firm upper limits on transportation and communication technology, making it unlikely that the highest levels of the hierarchy would be able to micromanage everything that happens on planetary and local scales. A superintelligent AI in the Andromeda galaxy wouldn’t be able to give you useful orders for your day-to-day decisions given that you’d need to wait over five million years for your instructions (that’s the round-trip time for you to exchange messages traveling at the speed of light). In the same way, the round-trip travel time for a message crossing Earth is about 0.1 second (about the timescale on which we humans think), so an Earth-sized AI brain could have truly global thoughts only about as fast as a human one. For a small AI performing one operation each billionth of a second (which is typical of today’s computers), 0.1 second would feel to it the way four months feel to you, so for it to be micromanaged by a planet-controlling AI would be as inefficient as if you asked permission for even your most trivial decisions through transatlantic letters delivered by Columbus-era ships.
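The scalings quoted in this paragraph are easy to check. Here is a short C sketch (the physical constants are standard; the “subjective time” conversion follows the text’s assumption that humans think on roughly 0.1-second timescales while the machine ticks once per nanosecond):

```c
#include <stdio.h>

/* Verify the communication-delay scalings quoted in the text. */
int main(void) {
    const double c  = 2.998e8;              /* speed of light, m/s      */
    const double ly = 9.461e15;             /* one light-year, m        */
    const double andromeda = 2.5e6 * ly;    /* ~2.5 million light-years */
    const double earth_diam = 1.274e7;      /* Earth's diameter, m      */
    const double year = 3.156e7;            /* seconds per year         */

    double andromeda_rt = 2 * andromeda / c / year;   /* years   */
    double earth_rt     = 2 * earth_diam / c;         /* seconds */

    /* A 1-op-per-nanosecond machine experiences 0.1 s stretched by a
       factor of 0.1 s / 1 ns = 1e8 relative to a human thinking on
       ~0.1 s timescales: 0.1 s * 1e8 = 1e7 s, roughly four months. */
    double subjective_days = 0.1 * (0.1 / 1e-9) / 86400.0;

    printf("Andromeda round trip: %.1e years\n", andromeda_rt);   /* ~5e6  */
    printf("Earth round trip:     %.2f s\n", earth_rt);           /* ~0.09 */
    printf("0.1 s feels like:     %.0f days (~4 months)\n", subjective_days);
    return 0;
}
```

(The Earth figure uses the straight-line diameter; routing along the surface or through fiber makes it somewhat longer, hence the text’s round 0.1 second.)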

This physics-imposed speed limit on information transfer therefore poses an obvious challenge for any AI wishing to take over our world, let alone our Universe. Before Prometheus broke out, it put very careful thought into how to avoid mind fragmentation, so that its many AI modules running on different computers around the world had goals and incentives to coordinate and act as a single unified entity. Just as the Omegas faced a control problem when they tried to keep Prometheus in check, Prometheus faced a self-control problem when it tried to ensure that none of its parts would revolt. We clearly don’t yet know how large a system an AI will be able to control directly, or indirectly through some sort of collaborative hierarchy—even if a fast takeoff gave it a decisive strategic advantage.

In summary, the question of how a superintelligent future will be controlled is fascinatingly complex, and we clearly don’t know the answer yet. Some argue that things will get more authoritarian; others claim that it will lead to greater individual empowerment.

Cyborgs and Uploads

A staple of science fiction is that humans will merge with machines, either by technologically enhancing biological bodies into cyborgs (short for “cybernetic organisms”) or by uploading our minds into machines. In his book The Age of Em, economist Robin Hanson gives a fascinating survey of what life might be like in a world teeming with uploads (also known as emulations, nicknamed Ems). I think of an upload as the extreme end of the cyborg spectrum, where the only remaining part of the human is the software. Hollywood cyborgs range from visibly mechanical, such as the Borg from Star Trek, to androids almost indistinguishable from humans, such as the Terminators. Fictional uploads range in intelligence from human-level as in the Black Mirror episode “White Christmas” to clearly superhuman as in Transcendence.

If superintelligence indeed comes about, the temptation to become cyborgs or uploads will be strong. As Hans Moravec puts it in his 1988 classic Mind Children: “Long life loses much of its point if we are fated to spend it staring stupidly at ultra-intelligent machines as they try to describe their ever more spectacular discoveries in baby-talk that we can understand.” Indeed, the temptation of technological enhancement is already so strong that many humans have eyeglasses, hearing aids, pacemakers and prosthetic limbs, as well as medicinal molecules circulating in their bloodstreams. Some teenagers appear to be permanently attached to their smartphones, and my wife teases me about my attachment to my laptop.

One of today’s most prominent cyborg proponents is Ray Kurzweil. In his book The Singularity Is Near, he argues that the natural continuation of this trend is to use nanobots, intelligent biofeedback systems and other technology to replace first our digestive and endocrine systems, our blood and our hearts by the early 2030s, and then to upgrade our skeletons, skin, brains and the rest of our bodies during the next two decades. He guesses that we’re likely to keep the aesthetics and emotional import of human bodies, but will redesign them to rapidly change their appearance at will, both physically and in virtual reality (thanks to novel brain-computer interfaces). Moravec agrees with Kurzweil that cyborgization would go far beyond merely improving our DNA: “a genetically engineered superhuman would be just a second-rate kind of robot, designed under the handicap that its construction can only be by DNA-guided protein synthesis.” Further, he argues that we’ll do even better by eliminating the human body entirely and uploading minds, creating a whole-brain emulation in software. Such an upload can live in a virtual reality or be embodied in a robot capable of walking, flying, swimming, space-faring or anything else allowed by the laws of physics, unencumbered by such everyday concerns as death or limited cognitive resources.

Although these ideas may sound like science fiction, they certainly don’t violate any known laws of physics, so the most interesting question isn’t whether they can happen, but whether they will happen and, if so, when. Some leading thinkers guess that the first human-level AGI will be an upload, and that this is how the path toward superintelligence will begin. However, I think it’s fair to say that this is currently a minority view among AI researchers and neuroscientists, most of whom guess that the quickest route to superintelligence is to bypass brain emulation and engineer it in some other way—after which we may or may not remain interested in brain emulation. After all, why should our simplest path to a new technology be the one that evolution came up with, constrained by requirements that it be self-assembling, self-repairing and self-reproducing? Evolution optimizes strongly for energy efficiency because of limited food supply, not for ease of construction or understanding by human engineers. My wife, Meia, likes to point out that the aviation industry didn’t start with mechanical birds. Indeed, when we finally figured out how to build mechanical birds in 2011, more than a century after the Wright brothers’ first flight, the aviation industry showed no interest in switching to wing-flapping mechanical-bird travel, even though it’s more energy efficient—because our simpler earlier solution is better suited to our travel needs.

In the same way, I suspect that there are simpler ways to build human-level thinking machines than the solution evolution came up with, and even if we one day manage to replicate or upload brains, we’ll end up discovering one of those simpler solutions first. It will probably draw more than the twelve watts of power that your brain uses, but its engineers won’t be as obsessed about energy efficiency as evolution was—and soon enough, they’ll be able to use their intelligent machines to design more energy-efficient ones.

What Will Actually Happen?

The short answer is obviously that we have no idea what will happen if humanity succeeds in building human-level AGI. For this reason, we’ve spent this chapter exploring a broad spectrum of scenarios. I’ve attempted to be quite inclusive, spanning the full range of speculations I’ve seen or heard discussed by AI researchers and technologists: fast takeoff/slow takeoff/no takeoff, humans/machines/cyborgs in control, one/many centers of power, etc. Some people have told me that they’re sure that this or that won’t happen. However, I think it’s wise to be humble at this stage and acknowledge how little we know, because for each scenario discussed above, I know at least one well-respected AI researcher who views it as a real possibility.

As time passes and we reach certain forks in the road, we’ll start to answer key questions and narrow down the options. The first big question is “Will we ever create human-level AGI?” The premise of this chapter is that we will, but there are AI experts who think it will never happen, at least not for hundreds of years. Time will tell! As I mentioned earlier, about half of the AI experts at our Puerto Rico conference guessed that it would happen by 2055. At a follow-up conference we organized two years later, this had dropped to 2047.

Before any human-level AGI is created, we may start getting strong indications about whether this milestone is likely to be first met by computer engineering, mind uploading or some unforeseen novel approach. If the computer engineering approach to AI that currently dominates the field fails to deliver AGI for centuries, this will increase the chance that uploading will get there first, as happened (rather unrealistically) in the movie Transcendence.

As human-level AGI draws closer, we’ll be able to make more educated guesses about the answer to the next key question: “Will there be a fast takeoff, a slow takeoff or no takeoff?” As we saw above, a fast takeoff makes world takeover easier, while a slow one makes an outcome with many competing players more likely. Nick Bostrom dissects this question of takeoff speed in an analysis of what he calls optimization power and recalcitrance, which are basically the amount of quality effort to make AI smarter and the difficulty of making progress, respectively. The average rate of progress clearly increases if more optimization power is brought to bear on the task and decreases if more recalcitrance is encountered. He makes arguments for why the recalcitrance might either increase or decrease as the AGI reaches and transcends human level, so keeping both options on the table is a safe bet. Turning to the optimization power, however, it’s overwhelmingly likely that it will grow rapidly as the AGI transcends human level, for the reasons we saw in the Omega scenario: the main input to further optimization comes not from people but from the machine itself, so the more capable it gets, the faster it improves (if recalcitrance stays fairly constant).

For any process whose power grows at a rate proportional to its current power, the result is that its power keeps doubling at regular intervals. We call such growth exponential, and we call such processes explosions. If baby-making power grows in proportion to the size of the population, we can get a population explosion. If the creation of neutrons capable of fissioning plutonium grows in proportion to the number of such neutrons, we can get a nuclear explosion. If machine intelligence grows at a rate proportional to the current power, we can get an intelligence explosion. All such explosions are characterized by the time they take to double their power. If that time is hours or days for an intelligence explosion, as in the Omega scenario, we have a fast takeoff on our hands.
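All three examples share the same underlying math: if a quantity P (population, neutron count, machine intelligence) grows at a rate proportional to its current value, a two-line derivation gives exponential growth with a constant doubling time T. The following is just a restatement of the paragraph above:

```latex
\frac{dP}{dt} = kP
\quad\Longrightarrow\quad
P(t) = P_0\, e^{kt} = P_0\, 2^{t/T},
\qquad T = \frac{\ln 2}{k}.
```

A fast takeoff in the text’s sense is simply the case where the doubling time T comes out in hours or days rather than years.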

This explosion timescale depends crucially on whether improving the AI requires merely new software (which can be created in a matter of seconds, minutes or hours) or new hardware (which might require months or years). In the Omega scenario, there was a significant hardware overhang, in Bostrom’s terminology: the Omegas had compensated for the low quality of their original software by vast amounts of hardware, which meant that Prometheus could perform a large number of quality doublings by improving its software alone. There was also a major content overhang in the form of much of the internet’s data; Prometheus 1.0 was still not smart enough to make use of most of it, but once Prometheus’ intelligence grew, the data it needed for further learning was already available without delay.

The hardware and electricity costs of running the AI are crucial as well, since we won’t get an intelligence explosion until the cost of doing human-level work drops below human-level hourly wages. Suppose, for example, that the first human-level AGI can be efficiently run on the Amazon cloud at a cost of $1 million per hour of human-level work produced. This AI would have great novelty value and undoubtedly make headlines, but it wouldn’t undergo recursive self-improvement, because it would be much cheaper to keep using humans to improve it. Suppose that these humans gradually manage to cut the cost to $100,000/hour, $10,000/hour, $1,000/hour, $100/hour, $10/hour and finally $1/hour. By the time the cost of using the computer to reprogram itself finally drops far below the cost of paying human programmers to do the same, the humans can be laid off and the optimization power greatly expanded by buying cloud-computing time. This produces further cost cuts, allowing still more optimization power, and the intelligence explosion has begun.
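The threshold logic in this paragraph can be captured in a toy simulation (my sketch; every number, including the $100/hour wage and both rates of improvement, is invented for illustration): humans cut the AI’s hourly cost step by step, and the moment it undercuts the human wage, the optimization loop switches to the much faster machine-driven regime.

```c
#include <stdio.h>

/* Toy model of the cost threshold described above: humans improve the
   AI until it becomes cheaper than they are, after which the AI takes
   over its own optimization. All numbers are invented. */
int main(void) {
    double ai_cost = 1e6;       /* $ per hour of human-level work produced */
    double human_wage = 100.0;  /* $ per hour for a human programmer       */
    int round = 0;

    while (ai_cost >= human_wage) {      /* humans do the improving    */
        ai_cost /= 10.0;                 /* one 10x cost cut per round */
        printf("round %d (humans): $%.2f/hour\n", ++round, ai_cost);
    }
    printf("AI undercuts human wage: recursive self-improvement begins\n");
    for (int cycle = 1; cycle <= 5; cycle++) {   /* AI improves itself */
        ai_cost /= 100.0;       /* invented: compounding is much faster */
        printf("cycle %d (AI):     $%.2e/hour\n", cycle, ai_cost);
    }
    return 0;
}
```

The point of the toy model is the regime change, not the particular numbers: below the wage threshold, each cost cut buys more optimization power, which in turn buys the next cost cut.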

This leaves us with our final key question: “Who or what will control the intelligence explosion and its aftermath, and what are their/its goals?” We’ll explore possible goals and outcomes in the next chapter and more deeply in chapter 7. To sort out the control issue, we need to know both how well an AI can be controlled, and how much an AI can control.

In terms of what will ultimately happen, you’ll currently find serious thinkers all over the map: some contend that the default outcome is doom, while others insist that an awesome outcome is virtually guaranteed. To me, however, this query is a trick question: it’s a mistake to passively ask “what will happen,” as if it were somehow predestined! If a technologically superior alien civilization arrived tomorrow, it would indeed be appropriate to wonder “what will happen” as their spaceships approached, because their power would probably be so far beyond ours that we’d have no influence over the outcome. If a technologically superior AI-fueled civilization arrives because we built it, on the other hand, we humans have great influence over the outcome—influence that we exerted when we created the AI. So we should instead ask: “What should happen? What future do we want?” In the next chapter, we’ll explore a wide spectrum of possible aftermaths of the current race toward AGI, and I’m quite curious how you’d rank them from best to worst. Only once we’ve thought hard about what sort of future we want will we be able to begin steering a course toward a desirable future. If we don’t know what we want, we’re unlikely to get it.

THE BOTTOM LINE:

•If we one day succeed in building human-level AGI, this may trigger an intelligence explosion, leaving us far behind.

•If a group of humans manage to control an intelligence explosion, they may be able to take over the world in a matter of years.

•If humans fail to control an intelligence explosion, the AI itself may take over the world even faster.

•Whereas a rapid intelligence explosion is likely to lead to a single world power, a slow one dragging on for years or decades may be more likely to lead to a multipolar scenario with a balance of power between a large number of rather independent entities.

•The history of life shows it self-organizing into an ever more complex hierarchy shaped by collaboration, competition and control. Superintelligence is likely to enable coordination on ever-larger cosmic scales, but it’s unclear whether it will ultimately lead to more totalitarian top-down control or more individual empowerment.

•Cyborgs and uploads are plausible, but arguably not the fastest route to advanced machine intelligence.

•The climax of our current race toward AI may be either the best or the worst thing ever to happen to humanity, with a fascinating spectrum of possible outcomes that we’ll explore in the next chapter.

•We need to start thinking hard about which outcome we prefer and how to steer in that direction, because if we don’t know what we want, we’re unlikely to get it.
