Controversies of Consciousness

Book: Life 3.0 / Chapter 38

Life 3.0

41 chapters

Brief description

  • Level: very hard

English text of the chapter

Controversies of Consciousness

We’ve already discussed the perennial controversy about whether consciousness research is unscientific nonsense and a pointless waste of time. In addition, there are recent controversies at the cutting edge of consciousness research—let’s explore the ones that I find most enlightening.

Giulio Tononi’s IIT has lately drawn not merely praise but also criticism, some of which has been scathing. Scott Aaronson recently had this to say on his blog: “In my opinion, the fact that Integrated Information Theory is wrong—demonstrably wrong, for reasons that go to its core—puts it in something like the top 2% of all mathematical theories of consciousness ever proposed. Almost all competing theories of consciousness, it seems to me, have been so vague, fluffy and malleable that they can only aspire to wrongness.”22 To the credit of both Scott and Giulio, they never came to blows when I watched them debate IIT at a recent New York University workshop, and they politely listened to each other’s arguments. Aaronson showed that certain simple networks of logic gates had extremely high integrated information (Φ) and argued that since they clearly weren’t conscious, IIT was wrong. Giulio countered that if they were built, they would be conscious, and that Scott’s assumption to the contrary was anthropocentrically biased, much as if a slaughterhouse owner claimed that animals couldn’t be conscious just because they couldn’t talk and were very different from humans. My analysis, with which they both agreed, was that they were at odds about whether integration was merely a necessary condition for consciousness (which Scott was OK with) or also a sufficient condition (which Giulio claimed). The latter is clearly a stronger and more contentious claim, which I hope we can soon test experimentally.23

Another controversial IIT claim is that today’s computer architectures can’t be conscious, because the way their logic gates connect gives very low integration.24 In other words, if you upload yourself into a future high-powered robot that accurately simulates every single one of your neurons and synapses, then even if this digital clone looks, talks and acts indistinguishably from you, Giulio claims that it will be an unconscious zombie without subjective experience—which would be disappointing if you uploaded yourself in a quest for subjective immortality.*6 This claim has been challenged by both David Chalmers and AI professor Murray Shanahan by imagining what would happen if you instead gradually replaced the neural circuits in your brain by hypothetical digital hardware perfectly simulating them.25 Although your behavior would be unaffected by the replacement since the simulation is by assumption perfect, your experience would change from conscious initially to unconscious at the end, according to Giulio. But how would it feel in between, as ever more got replaced? When the parts of your brain responsible for your conscious experience of the upper half of your visual field were replaced, would you notice that part of your visual scenery was suddenly missing, but that you mysteriously knew what was there nonetheless, as reported by patients with “blindsight”?26 This would be deeply troubling, because if you can consciously experience any difference, then you can also tell your friends about it when asked—yet by assumption, your behavior can’t change. The only logical possibility compatible with the assumptions is that at exactly the same instant that any one thing disappears from your consciousness, your mind is mysteriously altered so as either to make you lie and deny that your experience changed, or to forget that things had been different.

On the other hand, Murray Shanahan admits that the same gradual-replacement critique can be leveled at any theory claiming that you can act conscious without being conscious, so you might be tempted to conclude that acting and being conscious are one and the same, and that externally observable behavior is therefore all that matters. But then you’d have fallen into the trap of predicting that you’re unconscious while dreaming, even though you know better.

A third IIT controversy is whether a conscious entity can be made of parts that are separately conscious. For example, can society as a whole gain consciousness without the people in it losing theirs? Can a conscious brain have parts that are also conscious on their own? The prediction from IIT is a firm “no,” but not everyone is convinced. For example, some patients with lesions severely reducing communication between the two halves of their brain experience “alien hand syndrome,” where their right brain makes their left hand do things that the patients claim they aren’t causing or understanding—sometimes to the point that they use their other hand to restrain their “alien” hand. How can we be so sure that there aren’t two separate consciousnesses in their head, one in the right hemisphere that’s unable to speak and another in the left hemisphere that’s doing all the talking and claiming to speak for both of them? Imagine using future technology to build a direct communication link between two human brains, and gradually increasing the capacity of this link until communication is as efficient between the brains as it is within them. Would there come a moment when the two individual consciousnesses suddenly disappear and get replaced by a single unified one as IIT predicts, or would the transition be gradual so that the individual consciousnesses coexisted in some form even as a joint experience began to emerge?

Another fascinating controversy is whether experiments underestimate how much we’re conscious of. We saw earlier that although we feel we’re visually conscious of vast amounts of information involving colors, shapes, objects and seemingly everything that’s in front of us, experiments have shown that we can only remember and report a dismally small fraction of this.27 Some researchers have tried to resolve this discrepancy by asking whether we may sometimes have “consciousness without access,” that is, subjective experience of things that are too complex to fit into our working memory for later use.28 For example, when you experience inattentional blindness by being too distracted to notice an object in plain sight, this doesn’t imply that you had no conscious visual experience of it, merely that it wasn’t stored in your working memory.29 Should it count as forgetfulness rather than blindness? Other researchers reject this idea that people can’t be trusted about what they say they experienced, and warn of its implications. Murray Shanahan imagines a clinical trial where patients report complete pain relief thanks to a new wonder drug, which nonetheless gets rejected by a government panel: “The patients only think they are not in pain. Thanks to neuroscience, we know better.”30 On the other hand, there have been cases where patients who accidentally awoke during surgery were given a drug to make them forget the ordeal. Should we trust their subsequent report that they experienced no pain?

How Might AI Consciousness Feel?

If some future AI system is conscious, then what will it subjectively experience? This is the essence of the “even harder problem” of consciousness, and forces us up to the second level of difficulty depicted in figure 8.1. Not only do we currently lack a theory that answers this question, but we’re not even sure whether it’s logically possible to fully answer it. After all, what could a satisfactory answer sound like? How would you explain to a person born blind what the color red looks like?

Fortunately, our current inability to give a complete answer doesn’t prevent us from giving partial answers. Intelligent aliens studying the human sensory system would probably infer that colors are qualia that feel associated with each point on a two-dimensional surface (our visual field), while sounds don’t feel as spatially localized, and pains are qualia that feel associated with different parts of our body. From discovering that our retinas have three types of light-sensitive cone cells, they could infer that we experience three primary colors and that all other color qualia result from combining them. By measuring how long it takes neurons to transmit information across the brain, they could conclude that we experience no more than about ten conscious thoughts or perceptions per second, and that when we watch movies on our TV at twenty-four frames per second, we experience this not as a sequence of still images, but as continuous motion. From measuring how fast adrenaline is released into our bloodstream and how long it remains before being broken down, they could predict that we feel bursts of anger starting within seconds and lasting for minutes.

Applying similar physics-based arguments, we can make some educated guesses about certain aspects of how an artificial consciousness may feel. First of all, the space of possible AI experiences is huge compared to what we humans can experience. We have one class of qualia for each of our senses, but AIs can have vastly more types of sensors and internal representations of information, so we must avoid the pitfall of assuming that being an AI necessarily feels similar to being a person.

Second, a brain-sized artificial consciousness could have millions of times more experiences than us per second, since electromagnetic signals travel at the speed of light—millions of times faster than neuron signals. However, the larger the AI, the slower its global thoughts must be to allow information time to flow between all its parts, as we saw in chapter 4. We’d therefore expect an Earth-sized “Gaia” AI to have only about ten conscious experiences per second, like a human, and a galaxy-sized AI could have only one global thought every 100,000 years or so—so no more than about a hundred thousand experiences during the entire history of our Universe thus far! This would give large AIs a seemingly irresistible incentive to delegate computations to the smallest subsystems capable of handling them, to speed things up, much like our conscious mind has delegated the blink reflex to a small, fast and unconscious subsystem. Although we saw above that the conscious information processing in our brains appears to be merely the tip of an otherwise unconscious iceberg, we should expect the situation to be even more extreme for large future AIs: if they have a single consciousness, then it’s likely to be unaware of almost all the information processing taking place within it. Moreover, although the conscious experiences that it enjoys may be extremely complex, they’re also snail-paced compared to the rapid activities of its smaller parts.
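These order-of-magnitude figures can be checked with a quick back-of-the-envelope calculation. The few lines of Python below are my own illustrative sketch (not from the book), assuming that a single integrated global thought requires information to cross the whole system at least once at the speed of light:

# Rough estimate: a global thought needs information to cross the whole system,
# so the light-crossing time caps the rate of integrated, system-wide thoughts.
C = 3.0e8                      # speed of light in m/s
SECONDS_PER_YEAR = 3.15e7
UNIVERSE_AGE_YEARS = 1.38e10   # ~13.8 billion years

sizes_m = {
    "Earth-sized 'Gaia' AI": 1.3e7,   # Earth's diameter in metres
    "galaxy-sized AI": 9.5e20,        # ~100,000 light-years in metres
}

for name, size in sizes_m.items():
    crossing_s = size / C             # time for one light-crossing
    print(f"{name}: one crossing takes {crossing_s:.3g} s "
          f"(~{crossing_s / SECONDS_PER_YEAR:.3g} years), "
          f"so at most ~{1 / crossing_s:.3g} global thoughts per second")

# Global thoughts available to a galaxy-sized AI since the Big Bang:
crossings = UNIVERSE_AGE_YEARS * SECONDS_PER_YEAR / (9.5e20 / C)
print(f"galaxy-sized AI: roughly {crossings:,.0f} global thoughts so far")

The Earth-sized case caps out at a few dozen light-crossings per second, consistent with roughly ten experiences per second once several crossings per thought are allowed for, while the galaxy-sized case yields on the order of a hundred thousand global thoughts over the Universe's history to date.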

This really brings to a head the aforementioned controversy about whether parts of a conscious entity can be conscious too. IIT predicts not, which means that if a future astronomically large AI is conscious, then almost all its information processing is unconscious. This would mean that if a civilization of smaller AIs improves its communication abilities to the point that a single conscious hive mind emerges, their much faster individual consciousnesses are suddenly extinguished. If the IIT prediction is wrong, on the other hand, the hive mind can coexist with the panoply of smaller conscious minds. Indeed, one could even imagine a nested hierarchy of consciousnesses at all levels from microscopic to cosmic.

As we saw above, the unconscious information processing in our human brains appears linked to the effortless, fast and automatic way of thinking that psychologists call “System 1.”32 For example, your System 1 might inform your consciousness that its highly complex analysis of visual input data has determined that your best friend has arrived, without giving you any idea how the computation took place. If this link between systems and consciousness proves to be valid, then it will be tempting to generalize this terminology to AIs, denoting all rapid routine tasks delegated to unconscious subunits as the AI’s System 1. The effortful, slow and controlled global thinking of the AI would, if conscious, be the AI’s System 2. We humans also have conscious experiences involving what I’ll term “System 0”: raw passive perception that takes place even when you sit without moving or thinking and merely observe the world around you. Systems 0, 1 and 2 seem progressively more complex, so it’s striking that only the middle one appears unconscious. IIT explains this by saying that raw sensory information in System 0 is stored in grid-like brain structures with very high integration, while System 2 has high integration because of feedback loops, where all the information you’re aware of right now can affect your future brain states. On the other hand, it was precisely the conscious-grid prediction that triggered Scott Aaronson’s aforementioned IIT-critique. In summary, if a theory solving the pretty hard problem of consciousness can one day pass a rigorous battery of experimental tests so that we start taking its predictions seriously, then it will also greatly narrow down the options for the even harder problem of what future conscious AIs may experience.

Some aspects of our subjective experience clearly trace back to our evolutionary origins, for example our emotional desires related to self-preservation (eating, drinking, avoiding getting killed) and reproduction. This means that it should be possible to create AI that never experiences qualia such as hunger, thirst, fear or sexual desire. As we saw in the last chapter, if a highly intelligent AI is programmed to have virtually any sufficiently ambitious goal, it’s likely to strive for self-preservation in order to be able to accomplish that goal. If they’re part of a society of AIs, however, they might lack our strong human fear of death: as long as they’ve backed themselves up, all they stand to lose are the memories they’ve accumulated since their most recent backup, as long as they’re confident that their backed-up software will be used. In addition, the ability to readily copy information and software between AIs would probably reduce the strong sense of individuality that’s so characteristic of our human consciousness: there would be less of a distinction between you and me if we could easily share and copy all our memories and abilities, so a group of nearby AIs may feel more like a single organism with a hive mind.

Would an artificial consciousness feel that it had free will? Note that, although philosophers have spent millennia quibbling about whether we have free will without reaching consensus even on how to define the question,33 I’m asking a different question, which is arguably easier to tackle. Let me try to persuade you that the answer is simply “Yes, any conscious decision maker will subjectively feel that it has free will, regardless of whether it’s biological or artificial.” Decisions fall on a spectrum between two extremes:

1. You know exactly why you made that particular choice.

2. You have no idea why you made that particular choice—it felt like you chose randomly on a whim.

Free-will discussions usually center around a struggle to reconcile our goal-oriented decision-making behavior with the laws of physics: if you’re choosing between the following two explanations for what you did, then which one is correct: “I asked her on a date because I really liked her” or “My particles made me do it by moving according to the laws of physics”? But we saw in the last chapter that both are correct: what feels like goal-oriented behavior can emerge from goal-less deterministic laws of physics. More specifically, when a system (brain or AI) makes a decision of type 1, it computes what to decide using some deterministic algorithm, and the reason it feels like it decided is that it in fact did decide when computing what to do. Moreover, as emphasized by Seth Lloyd,34 there’s a famous computer-science theorem saying that for almost all computations, there’s no faster way of determining their outcome than actually running them. This means that it’s typically impossible for you to figure out what you’ll decide to do in a second in less than a second, which helps reinforce your experience of having free will. In contrast, when a system (brain or AI) makes a decision of type 2, it simply programs its mind to base its decision on the output of some subsystem that acts as a random number generator. In brains and computers, effectively random numbers are easily generated by amplifying noise. Regardless of where on the spectrum from 1 to 2 a decision falls, both biological and artificial consciousnesses therefore feel that they have free will: they feel that it is really they who decide and they can’t predict with certainty what the decision will be until they’ve finished thinking it through.
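To make the two ends of this spectrum concrete, here is a minimal toy sketch in Python (my own illustration, not anything from the book): a type-1 decision is simply the output of a deterministic computation over the options, while a type-2 decision delegates the choice to a noise-based random-number generator.

import os

def type1_decision(options, utility):
    # Type 1 (deterministic): the decision *is* the computation of which
    # option scores highest; there is no shortcut to knowing the outcome
    # before the computation finishes.
    return max(options, key=utility)

def type2_decision(options):
    # Type 2 (whim): delegate the choice to a random-number generator.
    # The OS entropy pool stands in here for noise amplified by a brain or chip.
    return options[os.urandom(1)[0] % len(options)]

options = ["ask her on a date", "stay home"]
print(type1_decision(options, utility=len))  # deterministic: picks the highest-scoring option
print(type2_decision(options))               # unpredictable from the inside

In both cases the system only learns its choice by running the computation, which is the sense in which the decision feels freely made from the inside.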

Some people tell me that they find causality degrading, that it makes their thought processes meaningless and that it renders them “mere” machines. I find such negativity absurd and unwarranted. First of all, there’s nothing “mere” about human brains, which, as far as I’m concerned, are the most amazingly sophisticated physical objects in our known Universe. Second, what alternative would they prefer? Don’t they want it to be their own thought processes (the computations performed by their brains) that make their decisions? Their subjective experience of free will is simply how their computations feel from inside: they don’t know the outcome of a computation until they’ve finished it. That’s what it means to say that the computation is the decision.

Meaning

Let’s end by returning to the starting point of this book: How do we want the future of life to be? We saw in the previous chapter how diverse cultures around the globe all seek a future teeming with positive experiences, but that fascinatingly thorny controversies arise when seeking consensus on what should count as positive and how to make trade-offs between what’s good for different life forms. But let’s not let those controversies distract us from the elephant in the room: there can be no positive experiences if there are no experiences at all, that is, if there’s no consciousness. In other words, without consciousness, there can be no happiness, goodness, beauty, meaning or purpose—just an astronomical waste of space. This implies that when people ask about the meaning of life as if it were the job of our cosmos to give meaning to our existence, they’re getting it backward: It’s not our Universe giving meaning to conscious beings, but conscious beings giving meaning to our Universe. So the very first goal on our wish list for the future should be retaining (and hopefully expanding) biological and/or artificial consciousness in our cosmos, rather than driving it extinct.

If we succeed in this endeavor, then how will we humans feel about coexisting with ever smarter machines? Does the seemingly inexorable rise of artificial intelligence bother you and if so, why? In chapter 3, we saw how it should be relatively easy for AI-powered technology to satisfy our basic needs such as security and income as long as the political will to do so exists. However, perhaps you’re concerned that being well fed, clad, housed and entertained isn’t enough. If we’re guaranteed that AI will take care of all our practical needs and desires, might we nonetheless end up feeling that we lack meaning and purpose in our lives, like well-kept zoo animals?

Traditionally, we humans have often founded our self-worth on the idea of human exceptionalism: the conviction that we’re the smartest entities on the planet and therefore unique and superior. The rise of AI will force us to abandon this and become more humble. But perhaps that’s something we should do anyway: after all, clinging to hubristic notions of superiority over others (individuals, ethnic groups, species and so on) has caused awful problems in the past, and may be an idea ready for retirement. Indeed, human exceptionalism hasn’t only caused grief in the past, but it also appears unnecessary for human flourishing: if we discover a peaceful extraterrestrial civilization far more advanced than us in science, art and everything else we care about, this presumably wouldn’t prevent people from continuing to experience meaning and purpose in their lives. We could retain our families, friends and broader communities, and all activities that give us meaning and purpose, hopefully having lost nothing but arrogance.

As we plan our future, let’s consider the meaning not only of our own lives, but also of our Universe itself. Here two of my favorite physicists, Steven Weinberg and Freeman Dyson, represent diametrically opposite views. Weinberg, who won the Nobel Prize for foundational work on the standard model of particle physics, famously said, “The more the universe seems comprehensible, the more it also seems pointless.”35 Dyson, on the other hand, is much more optimistic, as we saw in chapter 6: although he agrees that our Universe was pointless, he believes that life is now filling it with ever more meaning, with the best yet to come if life succeeds in spreading throughout the cosmos. He ended his seminal 1979 paper thus: “Is Weinberg’s universe or mine closer to the truth? One day, before long, we shall know.”36 If our Universe goes back to being permanently unconscious because we drive Earth life extinct or because we let unconscious zombie AI take over our Universe, then Weinberg will be vindicated in spades.

From this perspective, we see that although we’ve focused on the future of intelligence in this book, the future of consciousness is even more important, since that’s what enables meaning. Philosophers like to go Latin on this distinction, by contrasting sapience (the ability to think intelligently) with sentience (the ability to subjectively experience qualia). We humans have built our identity on being Homo sapiens, the smartest entities around. As we prepare to be humbled by ever smarter machines, I suggest that we rebrand ourselves as Homo sentiens!
