Thought Hierarchies
Have you ever tried and failed to swat a fly with your hand? The reason that it can react faster than you is that it’s smaller, so that it takes less time for information to travel between its eyes, brain and muscles. This “bigger = slower” principle applies not only to biology, where the speed limit is set by how fast electrical signals can travel through neurons, but also to future cosmic life if no information can travel faster than light. So for an intelligent information-processing system, going big is a mixed blessing involving an interesting trade-off. On one hand, going bigger lets it contain more particles, which enable more complex thoughts. On the other hand, this slows down the rate at which it can have truly global thoughts, since it now takes longer for the relevant information to propagate to all its parts.
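To make the "bigger = slower" trade-off concrete: the minimum time for a globally coordinated thought is roughly the system's size divided by its internal signal speed. Here is a minimal back-of-the-envelope sketch, assuming a rough nerve-conduction speed of 10 m/s and illustrative body sizes (these numbers are assumptions for illustration, not figures from the text):

```python
# Rough illustration of the "bigger = slower" principle:
# minimum time for information to cross a thinking system once.

def crossing_time_s(size_m: float, signal_speed_m_per_s: float) -> float:
    """Time (seconds) for a signal to traverse the system."""
    return size_m / signal_speed_m_per_s

NEURAL_SPEED = 10.0  # m/s, assumed typical nerve-conduction speed

fly_delay = crossing_time_s(1e-3, NEURAL_SPEED)    # ~1 mm fly nervous system
human_delay = crossing_time_s(1.0, NEURAL_SPEED)   # ~1 m human body

print(f"fly:   {fly_delay * 1e3:.1f} ms per signalling round")
print(f"human: {human_delay * 1e3:.0f} ms per signalling round")
# The fly reacts roughly a thousand times faster simply because it is smaller.
```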
So if life engulfs our cosmos, what form will it choose: simple and fast, or complex and slow? I predict that it will make the same choice as Earth life has made: both! The denizens of Earth’s biosphere span a staggering range of sizes, from gargantuan two-hundred-ton blue whales down to the petite 10⁻¹⁶ kg bacterium Pelagibacter, believed to account for more biomass than all the world’s fish combined. Moreover, organisms that are large, complex and slow often mitigate their sluggishness by containing smaller modules that are simple and fast. For example, your blink reflex is extremely fast precisely because it’s implemented by a small and simple circuit that doesn’t involve most of your brain: if that hard-to-swat fly accidentally heads toward your eye, you’ll blink within a tenth of a second, long before the relevant information has had time to spread throughout your brain and make you consciously aware of what happened. By organizing its information processing into a hierarchy of modules, our biosphere manages to both have the cake and eat it, attaining both speed and complexity. We humans already use this same hierarchical strategy to optimize parallel computing.
Because internal communication is slow and costly, I expect advanced future cosmic life to do the same, so that computations will be done as locally as possible. If a computation is simple enough to do with a 1 kg computer, it’s counterproductive to spread it out over a galaxy-sized computer, since waiting for the information to be shared at the speed of light after each computational step causes a ridiculous delay of about 100,000 years per step.
What, if any, of this future information processing will be conscious in the sense of involving a subjective experience is a controversial and fascinating topic which we’ll explore in chapter 8. If consciousness requires the different parts of the system to be able to communicate with one another, then the thoughts of larger systems are by necessity slower. Whereas you or a future Earth-sized supercomputer can have many thoughts per second, a galaxy-sized mind could have only one thought every hundred thousand years, and a cosmic mind a billion light-years in size would only have time to have about ten thoughts in total before dark energy fragmented it into disconnected parts. On the other hand, these few precious thoughts and accompanying experiences might be quite deep!
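The same size-divided-by-signal-speed arithmetic, now with light as the fastest possible carrier, reproduces the figures in the last two paragraphs. A sketch, assuming a roughly 10-billion-year window before dark energy fragments the largest mind (an illustrative assumption):

```python
# One "global thought" takes at least one light-crossing time of the mind.
# Sizes are in light-years, so the crossing time at light speed is that many years.
SECONDS_PER_YEAR = 3.15e7

minds_ly = {
    "Earth-sized supercomputer": 1.3e-9,   # Earth is ~0.04 light-seconds across
    "galaxy-sized mind": 1e5,              # ~100,000 light-years across
    "billion-light-year mind": 1e9,
}
for name, size_ly in minds_ly.items():
    years_per_thought = size_ly            # light-crossing time at light speed
    print(f"{name}: {years_per_thought * SECONDS_PER_YEAR:.1e} s "
          f"(= {years_per_thought:.1e} years) per thought")

window_years = 1e10  # assumed window before dark energy fragments the largest mind
print(f"billion-light-year mind: ~{window_years / 1e9:.0f} thoughts in total")
```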
Control Hierarchies
If thought itself is organized in a hierarchy spanning a wide range of scales, then what about power? In chapter 4, we explored how intelligent entities naturally organize themselves into power hierarchies in Nash equilibrium, where any entity would be worse off if they altered their strategy. The better the communication and transportation technology gets, the larger these hierarchies can grow. If superintelligence one day expands to cosmic scales, what will its power hierarchy be like? Will it be freewheeling and decentralized or highly authoritarian? Will cooperation be based mainly on mutual benefit or on coercion and threats?
To shed light on these questions, let’s consider both the carrot and the stick: What incentives are there for collaboration on cosmic scales, and what threats might be used to enforce it?
Controlling with the Carrot
On Earth, trade has been a traditional driver of cooperation because the relative difficulty of producing things varies across the planet. If mining a kilogram of silver costs 300 times more than mining a kilogram of copper in one region, but only 100 times more in another, they’ll both come out ahead by trading 200 kg of copper against 1 kg of silver. If one region has much higher technology than another, both can similarly benefit from trading high-tech goods against raw materials.
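The mutual gain in the copper-for-silver deal can be checked with a few lines of bookkeeping, pricing everything in "kilograms of copper worth of mining effort", as in the example above:

```python
# Each region values goods by its own local mining costs, in units of
# "effort needed to mine 1 kg of copper".
SILVER_COST_A = 300  # region A: 1 kg of silver costs as much effort as 300 kg of copper
SILVER_COST_B = 100  # region B: 1 kg of silver costs as much effort as 100 kg of copper

# Proposed deal: A ships 200 kg of copper to B; B ships 1 kg of silver to A.
copper_shipped, silver_shipped = 200, 1

gain_A = SILVER_COST_A * silver_shipped - copper_shipped  # A gives 200, receives silver worth 300
gain_B = copper_shipped - SILVER_COST_B * silver_shipped  # B gives silver worth 100, receives 200

print(f"Region A gains the equivalent of {gain_A} kg of copper")  # 100
print(f"Region B gains the equivalent of {gain_B} kg of copper")  # 100
```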
However, if superintelligence develops technology that can readily rearrange elementary particles into any form of matter whatsoever, then it will eliminate most of the incentive for long-distance trade. Why bother shipping silver between distant solar systems when it’s simpler and quicker to transmute copper into silver by rearranging its particles? Why bother shipping high-tech machinery between galaxies when both the know-how and the raw materials (any matter will do) exist in both places? My guess is that in a cosmos teeming with superintelligence, almost the only commodity worth shipping long distances will be information. The only exception might be matter to be used for cosmic engineering projects—for example, to counteract the aforementioned destructive tendency of dark energy to tear civilizations apart. As opposed to traditional human trade, this matter can be shipped in any convenient bulk form whatsoever, perhaps even as an energy beam, since the receiving superintelligence can rapidly rearrange it into whatever objects it wants.
If sharing or trading of information emerges as the main driver of cosmic cooperation, then what sorts of information might be involved? Information will be valuable if it’s desirable yet takes a massive and time-consuming computational effort to generate. For example, a superintelligence may want answers to hard scientific questions about the nature of physical reality, hard mathematical questions about theorems and optimal algorithms and hard engineering questions about how to best build spectacular technology. Hedonistic life forms may want awesome digital entertainment and simulated experiences, and cosmic commerce may fuel demand for some form of cosmic cryptocurrency in the spirit of bitcoins.
Such sharing opportunities may incentivize information flow not only between entities of roughly equal power, but also up and down power hierarchies, say between solar-system-sized nodes and a galactic hub or between galaxy-sized nodes and a cosmic hub. The nodes might want this for the pleasure of being part of something greater, for being provided with answers and technologies that they couldn’t develop alone and for defense against external threats. They may also value the promise of near immortality through backup: just as many humans take solace in a belief that their minds will live on after their physical bodies die, an advanced AI may appreciate having its mind and knowledge live on in a hub supercomputer after its original physical hardware has depleted its energy reserves.
Conversely, the hub may want its nodes to help it with massive long-term computing tasks where the results aren’t urgently needed, so that it’s worth waiting thousands or millions of years for the answers. As we explored above, the hub may also want its nodes to help carry out massive cosmic engineering projects such as counteracting destructive dark energy by moving galactic mass concentrations together. If traversable wormholes turn out to be possible and buildable, then a top priority of a hub will probably be constructing a network of them to thwart dark energy and keep its empire connected indefinitely. The question of what ultimate goals a cosmic superintelligence may have is a fascinating and controversial one that we’ll explore further in chapter 7.
Controlling with the Stick
Terrestrial empires usually compel their subordinates to cooperate by using both the carrot and the stick. While subjects of the Roman Empire valued the technology, infrastructure and defense that they were offered as a reward for their cooperation, they also feared the inevitable repercussions of rebelling or not paying taxes. Because of the long time required to send troops from Rome to outlying provinces, part of the intimidation was delegated to local troops and loyal officials empowered to inflict near-instantaneous punishments. A superintelligent hub could use the analogous strategy of deploying a network of loyal guards throughout its cosmic empire. Since superintelligent subjects can be hard to control, the simplest viable strategy may be using AI guards that are programmed to be 100% loyal by virtue of being relatively dumb, simply monitoring whether all rules are obeyed and automatically triggering a doomsday device if not.
Suppose, for example, that the hub AI arranges for a white dwarf to be placed in the vicinity of a solar-system-sized civilization that it wishes to control. A white dwarf is the burnt-out husk of a modestly heavy star. Consisting largely of carbon, it resembles a giant diamond in the sky, and is so compact that it can weigh more than the Sun while being smaller than Earth. The Indian physicist Subrahmanyan Chandrasekhar famously proved that if you keep adding mass to it until it surpasses the Chandrasekhar limit, about 1.4 times the mass of our Sun, it will undergo a cataclysmic thermonuclear detonation known as a Type Ia supernova. If the hub AI has callously arranged for this white dwarf to be extremely close to its Chandrasekhar limit, the guard AI could be effective even if it were extremely dumb (indeed, largely because it was so dumb): it could be programmed to simply verify that the subjugated civilization had delivered its monthly quota of cosmic bitcoins, mathematical proofs or whatever other taxes were stipulated, and if not, toss enough mass onto the white dwarf to ignite the supernova and blow the entire region to smithereens.
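The point of this scenario is that the guard's entire decision procedure can be made almost trivially simple, which is exactly what makes it reliable. A toy sketch of that rule, with a hypothetical monthly quota and mass increment (none of the specific numbers below come from the text):

```python
# Toy model of the dumb-but-loyal guard AI's monthly check.
CHANDRASEKHAR_LIMIT = 1.4   # solar masses: above this, the white dwarf detonates
MONTHLY_QUOTA = 1000        # hypothetical tax: cosmic bitcoins, proofs, etc. per month

def monthly_check(delivered: int, white_dwarf_mass: float) -> float:
    """Return the white dwarf's mass after this month's inspection."""
    if delivered >= MONTHLY_QUOTA:
        return white_dwarf_mass            # quota met: do nothing
    # Quota missed: add enough mass to push the dwarf over the limit.
    return CHANDRASEKHAR_LIMIT + 0.01      # ignites a Type Ia supernova

mass = 1.39  # kept menacingly close to the limit by the hub AI
mass = monthly_check(delivered=950, white_dwarf_mass=mass)
print("supernova!" if mass > CHANDRASEKHAR_LIMIT else "region spared for another month")
```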
Galaxy-sized civilizations may be similarly controllable by placing large numbers of compact objects into tight orbits around the monster black hole at the galaxy center, and threatening to transform these masses into gas, for instance by colliding them. This gas would then start feeding the black hole, transforming it into a powerful quasar, potentially rendering much of the galaxy uninhabitable.
In summary, there are strong incentives for future life to cooperate over cosmic distances, but it’s a wide-open question whether such cooperation will be based mainly on mutual benefits or on brutal threats—the limits imposed by physics appear to allow both scenarios, so the outcome will depend on the prevailing goals and values. We’ll explore our ability to influence these goals and values of future life in chapter 7.
When Civilizations Clash
So far, we’ve only discussed scenarios where life expands into our cosmos from a single intelligence explosion. But what happens if life evolves independently in more than one place and two expanding civilizations meet?
If you consider a random solar system, there’s some probability that life will evolve on one of its planets, develop advanced technology and expand into space. This probability seems to be greater than zero since technological life has evolved here in our Solar System and the laws of physics appear to allow space settlement. If space is large enough (indeed, the theory of cosmological inflation suggests it to be vast or infinite), then there will be many such expanding civilizations, as illustrated in figure 6.10. Jay Olson’s above-mentioned paper includes an elegant analysis of such expanding cosmic biospheres, and Toby Ord has performed a similar analysis with colleagues at the Future of Humanity Institute. Viewed in three dimensions, these cosmic biospheres are quite literally spheres as long as civilizations expand with the same speed in all directions. In spacetime, they look like the upper part of the champagne glass in figure 6.7, because dark energy ultimately limits how many galaxies each civilization can reach.
If the distance between neighboring space-settling civilizations is much larger than the distance dark energy lets them expand, then they’ll never come into contact with each other or even find out about each other’s existence, so they’ll feel as if they’re alone in the cosmos. If our cosmos is more fecund so that neighbors are closer together, however, some civilizations will eventually overlap. What happens in these overlap regions? Will there be cooperation, competition or war?
Europeans were able to conquer Africa and the Americas because they had superior technology. In contrast, it’s plausible that long before two superintelligent civilizations encounter one another, their technologies will plateau at the same level, limited merely by the laws of physics. This makes it seem unlikely that one superintelligence could easily conquer the other even if it wanted to. Moreover, if their goals have evolved to be relatively aligned, then they may have little reason to desire conquest or war. For example, if they’re both trying to prove as many beautiful theorems as possible and invent as clever algorithms as possible, they can simply share their findings and both be better off. After all, information is very different from the resources that humans usually fight over, in that you can simultaneously give it away and keep it.
Some expanding civilizations might have goals that are essentially immutable, such as those of a fundamentalist cult or a spreading virus. However, it’s also plausible that some advanced civilizations are more like open-minded humans—willing to adjust their goals when presented with sufficiently compelling arguments. If two of them meet, there will be a clash not of weapons but of ideas, where the most persuasive one prevails and has its goals spread at the speed of light through the region controlled by the other civilization. Assimilating your neighbors is a faster expansion strategy than settlement, since your sphere of influence can spread at the speed with which ideas move (the speed of light using telecommunication), whereas physical settlement inevitably progresses slower than the speed of light. This assimilation will not be forced such as that infamously employed by the Borg in Star Trek, but voluntary based on the persuasive superiority of ideas, leaving the assimilated better off.
We’ve seen that the future cosmos can contain rapidly expanding bubbles of two kinds: expanding civilizations and those death bubbles that expand at light speed and make space uninhabitable by destroying all our elementary particles. An ambitious civilization can thus encounter three kinds of regions: uninhabited ones, life bubbles and death bubbles. If it fears uncooperative rival civilizations, it has a strong incentive to launch a rapid “land grab” and settle the uninhabited regions before the rivals do. However, it has the same expansionist incentive even if there are no other civilizations, simply to acquire resources before dark energy makes them unreachable. We just saw how bumping into another expanding civilization can be either better or worse than bumping into uninhabited space, depending on how cooperative and open-minded this neighbor is. However, it’s better to bump into any expansionist civilization (even one trying to convert your civilization into paper clips) than a death bubble, which will continue expanding at the speed of light regardless of whether you try to fight it or reason with it. Our only protection against death bubbles is dark energy, which prevents distant ones from ever reaching us. So if death bubbles are indeed common, then dark energy is actually not our enemy but our friend.
Are We Alone?
Many people take for granted that there’s advanced life throughout much of our Universe, so that human extinction wouldn’t matter much from a cosmic perspective. After all, why should we worry about wiping ourselves out if some inspiring Star Trek–like civilization would soon swoop in and re-seed our Solar System with life, perhaps even using their advanced technology to reconstruct and resuscitate us? I view this Star Trek assumption as dangerous, because it can lull us into a false sense of security and make our civilization apathetic and reckless. Indeed, I think that this assumption that we’re not alone in our Universe is not only dangerous but also probably false.
This is a minority view,*9 and I may well be wrong, but it’s at the very least a possibility that we can’t currently dismiss, which gives us a moral imperative to play it safe and not drive our civilization extinct.
When I give lectures about cosmology, I often ask the audience to raise their hands if they think there’s intelligent life elsewhere in our Universe (the region of space from which light has reached us so far during the 13.8 billion years since our Big Bang). Infallibly, almost everyone does, from kindergartners to college students. When I ask why, the basic answer I tend to get is that our Universe is so huge that there’s got to be life somewhere, at least statistically speaking. Let’s take a closer look at this argument and pinpoint its weakness.
It all comes down to one number: the typical distance between a civilization in figure 6.10 and its nearest neighbor. If this distance is much larger than 20 billion light-years, we should expect to be alone in our Universe (the part of space from which light has reached us during the 13.8 billion years since our Big Bang), and to never make contact with aliens. So what should we expect for this distance? We’re quite clueless. This means that the distance to our neighbor is in the ballpark of 1000…000 meters, where the total number of zeroes could reasonably be 21, 22, 23,…, 100, 101, 102 or more—but probably not much smaller than 21, since we haven’t yet seen compelling evidence of aliens (see figure 6.11). For our nearest neighbor civilization to be within our Universe, whose radius is about 10²⁶ meters, the number of zeroes can’t exceed 26, and the probability of the number of zeroes falling in the narrow range between 22 and 26 is rather small. This is why I think we’re alone in our Universe.
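This estimate can be made explicit with one simple assumption: since we're clueless, treat each possible number of zeroes in the quoted range as roughly equally likely. A minimal sketch under that assumption (taking 21 to 102 as the illustrative range of exponents, echoing the "21, 22, 23,…, 102 or more" above):

```python
# If the exponent (number of zeroes) is roughly uniform over the plausible range,
# how likely is it to land in the narrow window that puts our nearest neighbor
# inside our Universe (radius ~10^26 meters) yet beyond what we'd have noticed?
possible_exponents = range(21, 103)                        # 21 ... 102
in_window = [n for n in possible_exponents if 22 <= n <= 26]

p = len(in_window) / len(possible_exponents)
print(f"P(neighbor within our Universe) ≈ {p:.0%}")        # about 6%
```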
I give a detailed justification of this argument in my book Our Mathematical Universe, so I won’t rehash it here, but the basic reason for why we’re clueless about this neighbor distance is that we’re in turn clueless about the probability of intelligent life arising in a given place. As the American astronomer Frank Drake pointed out, this probability can be calculated by multiplying together the probability of there being a habitable environment there (say an appropriate planet), the probability that life will form there and the probability that this life will evolve to become intelligent. When I was a grad student, we had no clue about any of these three probabilities. After the past two decades’ dramatic discoveries of planets orbiting other stars, it now seems likely that habitable planets are abundant, with billions in our own Galaxy alone. The probability of evolving life and then intelligence, however, remains extremely uncertain: some experts think that one or both are rather inevitable and occur on most habitable planets, while others think that one or both are extremely rare because of one or more evolutionary bottlenecks that require a wild stroke of luck to pass through. Some proposed bottlenecks involve chicken-and-egg problems at the earliest stages of self-reproducing life: for example, for a modern cell to build a ribosome, the highly complex molecular machine that reads our genetic code and builds our proteins, it needs another ribosome, and it’s not obvious that the very first ribosome could evolve gradually from something simpler.10 Other proposed bottlenecks involve the development of higher intelligence. For example, although dinosaurs ruled Earth for over 100 million years, a thousand times longer than we modern humans have been around, evolution didn’t seem to inevitably push them toward higher intelligence and inventing telescopes or computers.
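Drake's decomposition links directly to the neighbor distance discussed above: the probability per star system is the product of the three factors, and since density dilutes in three dimensions, the typical nearest-neighbor distance grows as the inverse cube root of that probability. A sketch with deliberately made-up placeholder probabilities (the whole point of the paragraph is that the last two factors are unknown):

```python
# Drake-style decomposition: P(civilization) = P(habitable) * P(life) * P(intelligence).
p_habitable = 0.1       # plausible, given the abundance of exoplanets
p_life = 1e-10          # placeholder: could be near 1 or astronomically small
p_intelligence = 1e-5   # placeholder: same caveat

p_civ = p_habitable * p_life * p_intelligence

# If star systems sit ~10 light-years apart on average, civilizations sit roughly
# (1 / p_civ)**(1/3) times farther apart, since density dilutes in three dimensions.
star_spacing_ly = 10
neighbor_distance_ly = star_spacing_ly * (1 / p_civ) ** (1 / 3)
print(f"typical nearest-neighbor distance ≈ {neighbor_distance_ly:.1e} light-years")
```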
Some people counter my argument by saying that, yes, intelligent life could be very rare, but in fact it isn’t—our Galaxy is teeming with intelligent life that mainstream scientists are simply not noticing. Perhaps aliens have already visited Earth, as UFO enthusiasts claim. Perhaps aliens haven’t visited Earth, but they’re out there and they’re deliberately hiding from us (this has been called the “zoo hypothesis” by the U.S. astronomer John A. Ball, and features in sci-fi classics such as Olaf Stapledon’s Star Maker). Or perhaps they’re out there without deliberately hiding: they’re simply not interested in space settlement or large engineering projects that we’d have noticed.
Sure, we need to keep an open mind about these possibilities, but since there’s no generally accepted evidence for any of them, we also need to take seriously the alternative: that we’re alone. Moreover, I think we shouldn’t underestimate the diversity of advanced civilizations by assuming that they all share goals that make them go unnoticed: we saw above that resource acquisition is quite a natural goal for a civilization to have, and for us to notice, all it takes is one civilization deciding to overtly settle all it can and hence engulf our Galaxy and beyond. Confronted with the fact that there are millions of habitable Earth-like planets in our Galaxy that are billions of years older than Earth, giving ample time for ambitious inhabitants to settle the Galaxy, we therefore can’t dismiss the most obvious interpretation: that the origin of life requires a random fluke so unlikely that they’re all uninhabited.
If life is not rare after all, we may soon know. Ambitious astronomical surveys are searching atmospheres of Earth-like planets for evidence of oxygen produced by life. In parallel with this search for any life, the search for intelligent life was recently boosted by the Russian philanthropist Yuri Milner’s $100 million project “Breakthrough Listen.” It’s important not to be overly anthropocentric when searching for advanced life: if we discover an extraterrestrial civilization, it’s likely to already have gone superintelligent. As Martin Rees put it in a recent essay, “the history of human technological civilization is measured in centuries—and it may be only one or two more centuries before humans are overtaken or transcended by inorganic intelligence, which will then persist, continuing to evolve, for billions of years….We would be most unlikely to ‘catch’ it in the brief sliver of time when it took organic form.”11 I agree with Jay Olson’s conclusion in his aforementioned space settlement paper: “We regard the possibility that advanced intelligence will make use of the universe’s resources to simply populate existing earthlike planets with advanced versions of humans as an unlikely endpoint to the progression of technology.” So when you imagine aliens, don’t think of little green fellows with two arms and two legs, but think of the superintelligent spacefaring life we explored earlier in this chapter.
Although I’m a strong supporter of all the ongoing searches for extraterrestrial life, which are shedding light on one of the most fascinating questions in science, I’m secretly hoping that they’ll all fail and find nothing! The apparent incompatibility between the abundance of habitable planets in our Galaxy and the lack of extraterrestrial visitors, known as the Fermi paradox, suggests the existence of what the economist Robin Hanson calls a “Great Filter,” an evolutionary/technological roadblock somewhere along the developmental path from non-living matter to space-settling life. If we discover independently evolved life elsewhere, this would suggest that primitive life isn’t rare, and that the roadblock lies after our current human stage of development—perhaps because space settlement is impossible, or because almost all advanced civilizations self-destruct before they’re able to go cosmic. I’m therefore crossing my fingers that all searches for extraterrestrial life find nothing: this is consistent with the scenario where evolving intelligent life is rare but we humans got lucky, so that we have the roadblock behind us and have extraordinary future potential.
Outlook
So far, we’ve spent this book exploring the history of life in our Universe, from its humble beginnings billions of years ago to possible grand futures billions of years from now. If our current AI development eventually triggers an intelligence explosion and optimized space settlement, it will be an explosion in a truly cosmic sense: after spending billions of years as an almost negligibly small perturbation on an indifferent lifeless cosmos, life suddenly explodes onto the cosmic arena as a spherical blast wave expanding near the speed of light, never slowing down, and igniting everything in its path with the spark of life.
Such optimistic views of the importance of life in our cosmic future have been eloquently articulated by many of the thinkers we’ve encountered in this book. Because sci-fi authors are often dismissed as unrealistic romantic dreamers, I find it ironic that most sci-fi and scientific writing about space settlement now appears too pessimistic in the light of superintelligence. For example, we saw how intergalactic travel becomes much easier once people and other intelligent entities can be transmitted in digital form, potentially making us masters of our own destiny not only in our Solar System or the Milky Way Galaxy, but also in the cosmos.
Above we considered the very real possibility that we’re the only high-tech civilization in our Universe. Let’s spend the rest of this chapter exploring this scenario, and the huge moral responsibility it entails. This means that after 13.8 billion years, life in our Universe has reached a fork in the road, facing a choice between flourishing throughout the cosmos or going extinct. If we don’t keep improving our technology, the question isn’t whether humanity will go extinct, but how. What will get us first—an asteroid, a supervolcano, the burning heat of the aging Sun, or some other calamity (see figure 5.1)? Once we’re gone, the cosmic drama predicted by Freeman Dyson will play on without spectators: barring a cosmocalypse, stars burn out, galaxies fade and black holes evaporate, each ending its life with a huge explosion that releases over a million times as much energy as the Tsar Bomba, the most powerful hydrogen bomb ever built. As Freeman put it: “The cold expanding universe will be illuminated by occasional fireworks for a very long time.” Alas, this fireworks display will be a meaningless waste, with nobody there to enjoy it.
Without technology, our human extinction is imminent in the cosmic context of tens of billions of years, rendering the entire drama of life in our Universe merely a brief and transient flash of beauty, passion and meaning in a near eternity of meaninglessness experienced by nobody. What a wasted opportunity that would be! If instead of eschewing technology, we choose to embrace it, then we up the ante: we gain the potential both for life to survive and flourish and for life to go extinct even sooner, self-destructing due to poor planning (see figure 5.1). My vote is for embracing technology, and proceeding not with blind faith in what we build, but with caution, foresight and careful planning.
After 13.8 billion years of cosmic history, we find ourselves in a breathtakingly beautiful Universe, which through us humans has come alive and started becoming aware of itself. We’ve seen that life’s future potential in our Universe is grander than the wildest dreams of our ancestors, tempered by an equally real potential for intelligent life to go permanently extinct. Will life in our Universe fulfill its potential or squander it? This depends to a great extent on what we humans alive today do during our lifetime, and I’m optimistic that we can make the future of life truly awesome if we make the right choices. What should we want and how can we attain those goals? Let’s spend the rest of the book exploring some of the most difficult challenges involved and what we can do about them.
THE BOTTOM LINE:
•Compared to cosmic timescales of billions of years, an intelligence explosion is a sudden event where technology rapidly plateaus at a level limited only by the laws of physics.
•This technological plateau is vastly higher than today’s technology, allowing a given amount of matter to generate about ten billion times more energy (using sphalerons or black holes), store 12–18 orders of magnitude more information or compute 31–41 orders of magnitude faster—or to be converted to any other desired form of matter.
•Superintelligent life would not only make such dramatically more efficient use of its existing resources, but would also be able to grow today’s biosphere by about 32 orders of magnitude by acquiring more resources through cosmic settlement at near light speed.
•Dark energy limits the cosmic expansion of superintelligent life and also protects it from distant expanding death bubbles or hostile civilizations. The threat of dark energy tearing cosmic civilizations apart motivates massive cosmic engineering projects, including wormhole construction if this turns out to be feasible.
•The main commodity shared or traded across cosmic distances is likely to be information.
•Barring wormholes, the light-speed limit on communication poses severe challenges for coordination and control across a cosmic civilization. A distant central hub may incentivize its superintelligent “nodes” to cooperate either through rewards or through threats, say by deploying a local guard AI programmed to destroy the node by setting off a supernova or quasar unless the rules are obeyed.
•The collision of two expanding civilizations may result in assimilation, cooperation or war, where the latter is arguably less likely than it is between today’s civilizations.
•Despite popular belief to the contrary, it’s quite plausible that we’re the only life form capable of making our observable Universe come alive in the future.
•If we don’t improve our technology, the question isn’t whether humanity will go extinct, but merely how: will an asteroid, a supervolcano, the burning heat of the aging Sun or some other calamity get us first?
•If we do keep improving our technology with enough care, foresight and planning to avoid pitfalls, life has the potential to flourish on Earth and far beyond for many billions of years, beyond the wildest dreams of our ancestors.