
Chapter 8

Consciousness

I cannot imagine a consistent theory of everything that ignores consciousness.

Andrei Linde, 2002

We should strive to grow consciousness itself—to generate bigger, brighter lights in an otherwise dark universe.

Giulio Tononi, 2012

We’ve seen that AI can help us create a wonderful future if we manage to find answers to some of the oldest and toughest problems in philosophy—by the time we need them. We face, in Nick Bostrom’s words, philosophy with a deadline. In this chapter, let’s explore one of the thorniest philosophical topics of all: consciousness.

Who Cares?

Consciousness is controversial. If you mention the “C-word” to an AI researcher, neuroscientist or psychologist, they may roll their eyes. If they’re your mentor, they might instead take pity on you and try to talk you out of wasting your time on what they consider a hopeless and unscientific problem. Indeed, my friend Christof Koch, a renowned neuroscientist who leads the Allen Institute for Brain Science, told me that he was once warned against working on consciousness before he had tenure—by none other than Nobel laureate Francis Crick. If you look up “consciousness” in the 1989 Macmillan Dictionary of Psychology, you’re informed that “Nothing worth reading has been written on it.”1 As I’ll explain in this chapter, I’m more optimistic!

Although thinkers have pondered the mystery of consciousness for thousands of years, the rise of AI adds a sudden urgency, in particular to the question of predicting which intelligent entities have subjective experiences. As we saw in chapter 3, the question of whether intelligent machines should be granted some form of rights depends crucially on whether they’re conscious and can suffer or feel joy. As we discussed in chapter 7, it becomes hopeless to formulate utilitarian ethics based on maximizing positive experiences without knowing which intelligent entities are capable of having them. As mentioned in chapter 5, some people might prefer their robots to be unconscious to avoid feeling slave-owner guilt. On the other hand, they may desire the opposite if they upload their minds to break free from biological limitations: after all, what’s the point of uploading yourself into a robot that talks and acts like you if it’s a mere unconscious zombie, by which I mean that being the uploaded you doesn’t feel like anything? Isn’t this equivalent to committing suicide from your subjective point of view, even though your friends may not realize that your subjective experience has died?

For the long-term cosmic future of life (chapter 6), understanding what’s conscious and what’s not becomes pivotal: if technology enables intelligent life to flourish throughout our Universe for billions of years, how can we be sure that this life is conscious and able to appreciate what’s happening? If not, then would it be, in the words of the famous physicist Erwin Schrödinger, “a play before empty benches, not existing for anybody, thus quite properly speaking not existing”?2 In other words, if we enable high-tech descendants that we mistakenly think are conscious, would this be the ultimate zombie apocalypse, transforming our grand cosmic endowment into nothing but an astronomical waste of space?

What Is Consciousness?

Many arguments about consciousness generate more heat than light because the antagonists are talking past each other, unaware that they’re using different definitions of the C-word. Just as with “life” and “intelligence,” there’s no undisputed correct definition of the word “consciousness.” Instead, there are many competing ones, including sentience, wakefulness, self-awareness, access to sensory input and ability to fuse information into a narrative.3 In our exploration of the future of intelligence, we want to take a maximally broad and inclusive view, not limited to the sorts of biological consciousness that exist so far. That’s why the definition I gave in chapter 1, which I’m sticking with throughout this book, is very broad:

consciousness = subjective experience

In other words, if it feels like something to be you right now, then you’re conscious. It’s this particular definition of consciousness that gets to the crux of all the AI-motivated questions in the previous section: Does it feel like something to be Prometheus, AlphaGo or a self-driving Tesla?

To appreciate how broad our consciousness definition is, note that it doesn’t mention behavior, perception, self-awareness, emotions or attention. So by this definition, you’re conscious also when you’re dreaming, even though you lack wakefulness or access to sensory input and (hopefully!) aren’t sleepwalking and doing things. Similarly, any system that experiences pain is conscious in this sense, even if it can’t move. Our definition leaves open the possibility that some future AI systems may be conscious too, even if they exist merely as software and aren’t connected to sensors or robotic bodies.

With this definition, it’s hard not to care about consciousness. As Yuval Noah Harari puts it in his book Homo Deus:4 “If any scientist wants to argue that subjective experiences are irrelevant, their challenge is to explain why torture or rape are wrong without reference to any subjective experience.” Without such reference, it’s all just a bunch of elementary particles moving around according to the laws of physics—and what’s wrong with that?

What’s the Problem?

So what precisely is it that we don’t understand about consciousness? Few have thought harder about this question than David Chalmers, a famous Australian philosopher rarely seen without a playful smile and a black leather jacket—which my wife liked so much that she gave me a similar one for Christmas. He followed his heart into philosophy despite making the finals of the International Mathematics Olympiad—and despite the fact that his only B grade in college, marring his otherwise straight As, was for an introductory philosophy course. Indeed, he seems utterly undeterred by put-downs or controversy, and I’ve been astonished by his ability to politely listen to uninformed and misguided criticism of his own work without even feeling the need to respond.

As David has emphasized, there are really two separate mysteries of the mind. First, there’s the mystery of how a brain processes information, which David calls the “easy” problems. For example, how does a brain attend to, interpret and respond to sensory input? How can it report on its internal state using language? Although these questions are actually extremely difficult, they’re by our definitions not mysteries of consciousness, but mysteries of intelligence: they ask how a brain remembers, computes and learns. Moreover, we saw in the first part of the book how AI researchers have started to make serious progress on solving many of these “easy problems” with machines—from playing Go to driving cars, analyzing images and processing natural language.

Then there’s the separate mystery of why you have a subjective experience, which David calls the hard problem. When you’re driving, you’re experiencing colors, sounds, emotions, and a feeling of self. But why are you experiencing anything at all? Does a self-driving car experience anything at all? If you’re racing against a self-driving car, you’re both inputting information from sensors, processing it and outputting motor commands. But subjectively experiencing driving is something logically separate—is it optional, and if so, what causes it?

I approach this hard problem of consciousness from a physics point of view. From my perspective, a conscious person is simply food, rearranged. So why is one arrangement conscious, but not the other? Moreover, physics teaches us that food is simply a large number of quarks and electrons, arranged in a certain way. So which particle arrangements are conscious and which aren’t?

What I like about this physics perspective is that it transforms the hard problem that we as humans have struggled with for millennia into a more focused version that’s easier to tackle with the methods of science. Instead of starting with a hard problem of why an arrangement of particles can feel conscious, let’s start with a hard fact that some arrangements of particles do feel conscious while others don’t. For example, you know that the particles that make up your brain are in a conscious arrangement right now, but not when you’re in deep dreamless sleep.

This physics perspective leads to three separate hard questions about consciousness, as shown in figure 8.1. First of all, what properties of the particle arrangement make the difference? Specifically, what physical properties distinguish conscious and unconscious systems? If we can answer that, then we can figure out which AI systems are conscious. In the more immediate future, it can also help emergency-room doctors determine which unresponsive patients are conscious.

Second, how do physical properties determine what the experience is like? Specifically, what determines qualia, basic building blocks of consciousness such as the redness of a rose, the sound of a cymbal, the smell of a steak, the taste of a tangerine or the pain of a pinprick?*2

Third, why is anything conscious? In other words, is there some deep undiscovered explanation for why clumps of matter can be conscious, or is this just an unexplainable brute fact about the way the world works?

The computer scientist Scott Aaronson, a former MIT colleague of mine, has lightheartedly called the first question the “pretty hard problem” (PHP), as has David Chalmers. In that spirit, let’s call the other two the “even harder problem” (EHP) and the “really hard problem” (RHP), as illustrated in figure 8.1.

Is Consciousness Beyond Science?

When people tell me that consciousness research is a hopeless waste of time, the main argument they give is that it’s “unscientific” and always will be. But is that really true? The influential Austro-British philosopher Karl Popper popularized the now widely accepted adage “If it’s not falsifiable, it’s not scientific.” In other words, science is all about testing theories against observations: if a theory can’t be tested even in principle, then it’s logically impossible to ever falsify it, which by Popper’s definition means that it’s unscientific.

So could there be a scientific theory that answers any of the three consciousness questions in figure 8.1? Please let me try to persuade you that the answer is a resounding YES!, at least for the pretty hard problem: “What physical properties distinguish conscious and unconscious systems?” Suppose that someone has a theory that, given any physical system, answers the question of whether the system is conscious with “yes,” “no” or “unsure.” Let’s hook your brain up to a device that measures some of the information processing in different parts of your brain, and let’s feed this information into a computer program that uses the consciousness theory to predict which parts of that information are conscious, and presents you with its predictions in real time on a screen, as in figure 8.2. First you think of an apple. The screen informs you that there’s information about an apple in your brain which you’re aware of, but that there’s also information in your brainstem about your pulse that you’re unaware of. Would you be impressed? Although the first two predictions of the theory were correct, you decide to do some more rigorous testing. You think about your mother and the computer informs you that there’s information in your brain about your mother but that you’re unaware of this. The theory made an incorrect prediction, which means that it’s ruled out and goes in the garbage dump of scientific history together with Aristotelian mechanics, the luminiferous aether, geocentric cosmology and countless other failed ideas. Here’s the key point: Although the theory was wrong, it was scientific! Had it not been scientific, you wouldn’t have been able to test it and rule it out.
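To make the logic of this thought experiment concrete, here is a minimal sketch of the testing protocol in Python. Everything in it is invented for illustration (the toy theory, the brain “measurements” and the reports), but it shows why a single wrong definite prediction is enough to falsify a theory, and why a theory that only ever answers “unsure” can never be falsified at all:

```python
# A minimal sketch of the falsification protocol described above.
# Everything here is hypothetical: real brain measurements and real
# consciousness theories would be vastly more complicated.

def toy_theory(brain_process: str) -> str:
    """A stand-in 'consciousness theory': given a measured brain
    process, predict 'yes' (conscious), 'no' or 'unsure'."""
    predictions = {
        "visual_cortex_apple": "yes",
        "brainstem_pulse": "no",
        "memory_of_mother": "no",  # this prediction will turn out wrong
    }
    return predictions.get(brain_process, "unsure")

def test_theory(theory, reports):
    """Compare the theory's definite predictions against the subject's
    own reports; one contradiction falsifies the theory."""
    for process, reported in reports.items():
        predicted = theory(process)
        if predicted == "unsure":
            continue  # untestable here, but not yet falsified
        if predicted != reported:
            return (f"FALSIFIED: predicted {predicted!r} for {process}, "
                    f"but the subject reports {reported!r}")
    return "Survived this round of testing (not proven, just not yet falsified)"

# The subject's first-person reports from the thought experiment:
reports = {
    "visual_cortex_apple": "yes",  # aware of thinking about an apple
    "brainstem_pulse": "no",       # unaware of pulse regulation
    "memory_of_mother": "yes",     # aware of thinking about mother
}

print(test_theory(toy_theory, reports))
# The theory fails on the third prediction: wrong, but scientific,
# precisely because it could be tested and ruled out.
```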

Someone might criticize this conclusion and say that they have no evidence of what you’re conscious of, or even of you being conscious at all: although they heard you say that you’re conscious, an unconscious zombie could conceivably say the same thing. But this doesn’t make that consciousness theory unscientific, because they can trade places with you and test whether it correctly predicts their own conscious experiences.

On the other hand, if the theory refuses to make any predictions, merely replying “unsure” whenever queried, then it’s untestable and hence unscientific. This might happen because it’s applicable only in some situations, because the required computations are too hard to carry out in practice or because the brain sensors are no good. Today’s most popular scientific theories tend to be somewhere in the middle, giving testable answers to some but not all of our questions. For example, our core theory of physics will refuse to answer questions about systems that are simultaneously extremely small (requiring quantum mechanics) and extremely heavy (requiring general relativity), because we haven’t yet figured out which mathematical equations to use in this case. This core theory will also refuse to predict the exact masses of all possible atoms—in this case, we think we have the necessary equations, but we haven’t managed to accurately compute their solutions. The more dangerously a theory lives by sticking its neck out and making testable predictions, the more useful it is, and the more seriously we take it if it survives all our attempts to kill it. Yes, we can only test some predictions of consciousness theories, but that’s how it is for all physical theories. So let’s not waste time whining about what we can’t test, but get to work testing what we can test!

In summary, any theory predicting which physical systems are conscious (the pretty hard problem) is scientific, as long as it can predict which of your brain processes are conscious. However, the testability issue becomes less clear for the higher-up questions in figure 8.1. What would it mean for a theory to predict how you subjectively experience the color red? And if a theory purports to explain why there is such a thing as consciousness in the first place, then how do you test it experimentally? Just because these questions are hard doesn’t mean that we should avoid them, and we’ll indeed return to them below. But when confronted with several related unanswered questions, I think it’s wise to tackle the easiest one first. For this reason, my consciousness research at MIT is focused squarely on the base of the pyramid in figure 8.1. I recently discussed this strategy with my fellow physicist Piet Hut from Princeton, who joked that trying to build the top of the pyramid before the base would be like worrying about the interpretation of quantum mechanics before discovering the Schrödinger equation, the mathematical foundation that lets us predict the outcomes of our experiments.

When discussing what’s beyond science, it’s important to remember that the answer depends on time! Four centuries ago, Galileo Galilei was so impressed by math-based physics theories that he described nature as “a book written in the language of mathematics.” If he threw a grape and a hazelnut, he could accurately predict the shapes of their trajectories and when they would hit the ground. Yet he had no clue why one was green and the other brown, or why one was soft and the other hard—these aspects of the world were beyond the reach of science at the time. But not forever! When James Clerk Maxwell discovered his eponymous equations in 1861, it became clear that light and colors could also be understood mathematically. We now know that the aforementioned Schrödinger equation, discovered in 1925, can be used to predict all properties of matter, including what’s soft or hard. While theoretical progress has enabled ever more scientific predictions, technological progress has enabled ever more experimental tests: almost everything we now study with telescopes, microscopes or particle colliders was once beyond science. In other words, the purview of science has expanded dramatically since Galileo’s days, from a tiny fraction of all phenomena to a large percentage, including subatomic particles, black holes and our cosmic origins 13.8 billion years ago. This raises the question: What’s left?

To me, consciousness is the elephant in the room. Not only do you know that you’re conscious, but it’s all you know with complete certainty—everything else is inference, as René Descartes pointed out back in Galileo’s time. Will theoretical and technological progress eventually bring even consciousness firmly into the domain of science? We don’t know, just as Galileo didn’t know whether we’d one day understand light and matter.*4 Only one thing is guaranteed: we won’t succeed if we don’t try! That’s why I and many other scientists around the world are trying hard to formulate and test theories of consciousness.

Experimental Clues About Consciousness

Lots of information processing is taking place in our heads right now. Which of it is conscious and which isn’t? Before exploring consciousness theories and what they predict, let’s look at what experiments have taught us so far, ranging from traditional low-tech or no-tech observations to state-of-the-art brain measurements.

What Behaviors Are Conscious?

If you multiply 32 by 17 in your head, you’re conscious of many of the inner workings of your computation. But suppose I instead show you a portrait of Albert Einstein and tell you to say the name of its subject. As we saw in chapter 2, this too is a computational task: your brain is evaluating a function whose input is information from your eyes about a large number of pixel colors and whose output is information to muscles controlling your mouth and vocal cords. Computer scientists call this task “image classification” followed by “speech synthesis.” Although this computation is way more complicated than your multiplication task, you can do it much faster, seemingly without effort, and without being conscious of the details of how you do it. Your subjective experience consists merely of looking at the picture, experiencing a feeling of recognition and hearing yourself say “Einstein.”

Psychologists have long known that you can unconsciously perform a wide range of other tasks and behaviors as well, from blink reflexes to breathing, reaching, grabbing and keeping your balance. Typically, you’re consciously aware of what you did, but not how you did it. On the other hand, behaviors that involve unfamiliar situations, self-control, complicated logical rules, abstract reasoning or manipulation of language tend to be conscious. They’re known as behavioral correlates of consciousness, and they’re closely linked to the effortful, slow and controlled way of thinking that psychologists call “System 2.”5

It’s also known that you can convert many routines from conscious to unconscious through extensive practice, for example walking, swimming, bicycling, driving, typing, shaving, shoe tying, computer-gaming and piano playing.6 Indeed, it’s well known that experts do their specialties best when they’re in a state of “flow,” aware only of what’s happening at a higher level, and unconscious of the low-level details of how they’re doing it. For example, try reading the next sentence while being consciously aware of every single letter, as when you first learned to read. Can you feel how much slower it is, compared to when you’re merely conscious of the text at the level of words or ideas?

Indeed, unconscious information processing appears not only to be possible, but also to be more the rule than the exception. Evidence suggests that of the roughly 10⁷ bits of information that enter our brain each second from our sensory organs, we can be aware only of a tiny fraction, with estimates ranging from 10 to 50 bits.7 This suggests that the information processing that we’re consciously aware of is merely the tip of the iceberg.
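To put those figures in perspective, a quick back-of-the-envelope calculation using only the numbers quoted above:

```python
# Rough arithmetic on the figures quoted in the text.
sensory_input = 1e7                    # ~10^7 bits/second entering the brain
aware_low, aware_high = 10, 50         # bits/second we're consciously aware of

print(aware_low / sensory_input)       # 1e-06
print(aware_high / sensory_input)      # 5e-06
# Conscious awareness thus covers roughly one part in 200,000 to one
# part in a million of the incoming information stream.
```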

Taken together, these clues have led some researchers to suggest that conscious information processing should be thought of as the CEO of our mind, dealing with only the most important decisions requiring complex analysis of data from all over the brain.8 This would explain why, just like the CEO of a company, it usually doesn’t want to be distracted by knowing everything its underlings are up to, but it can find out if desired. To experience this selective attention in action, look at that word “desired” again: fix your gaze on the dot over the “i” and, without moving your eyes, shift your attention from the dot to the whole letter and then to the whole word. Although the information from your retina stayed the same, your conscious experience changed. The CEO metaphor also explains why expertise becomes unconscious: after painstakingly figuring out how to read and type, the CEO delegates these routine tasks to unconscious subordinates to be able to focus on new higher-level challenges.

Where Is Consciousness?

Clever experiments and analyses have suggested that consciousness is limited not merely to certain behaviors, but also to certain parts of the brain. Which are the prime suspects? Many of the first clues came from patients with brain lesions: localized brain damage caused by accidents, strokes, tumors or infections. But this was often inconclusive. For example, does the fact that lesions in the back of the brain can cause blindness mean that this is the site of visual consciousness, or does it merely mean that visual information passes through there en route to wherever it will later become conscious, just as it first passes through the eyes?

Although lesions and medical interventions haven’t pinpointed the locations of conscious experiences, they’ve helped narrow down the options. For example, I know that although I experience pain in my hand as actually occurring there, the pain experience must occur elsewhere, because a surgeon once switched off my hand pain without doing anything to my hand: he merely anesthetized nerves in my shoulder. Moreover, some amputees experience phantom pain that feels as though it’s in their nonexistent hand. As another example, I once noticed that when I looked only with my right eye, part of my visual field was missing—a doctor determined that my retina was coming loose and reattached it. In contrast, patients with certain brain lesions experience hemineglect, where they too miss information from half their visual field, but aren’t even aware that it’s missing—for example, failing to notice and eat the food on the left half of their plate. It’s as if consciousness about half of their world has disappeared. But are those damaged brain areas supposed to generate the spatial experience, or were they merely feeding spatial information to the sites of consciousness, just as my retina did?

The pioneering U.S.-Canadian neurosurgeon Wilder Penfield found in the 1930s that his neurosurgery patients reported different parts of their body being touched when he electrically stimulated specific brain areas in what’s now called the somatosensory cortex (figure 8.3).9 He also found that they involuntarily moved different parts of their body when he stimulated brain areas in what’s now called the motor cortex. But does that mean that information processing in these brain areas corresponds to consciousness of touch and motion?

Fortunately, modern technology is now giving us much more detailed clues. Although we’re still nowhere near being able to measure every single firing of all of your roughly hundred billion neurons, brain-reading technology is advancing rapidly, involving techniques with intimidating names such as fMRI, EEG, MEG, ECoG, ePhys and fluorescent voltage sensing. fMRI, which stands for functional magnetic resonance imaging, measures the magnetic properties of hydrogen nuclei to make a 3-D map of your brain roughly every second, with millimeter resolution. EEG (electroencephalography) and MEG (magnetoencephalography) measure the electric and magnetic field outside your head to map your brain thousands of times per second, but with poor resolution, unable to distinguish features smaller than a few centimeters. If you’re squeamish, you’ll appreciate that these three techniques are all noninvasive. If you don’t mind opening up your skull, you have additional options. ECoG (electrocorticography) involves placing, say, a hundred wires on the surface of your brain, while ePhys (electrophysiology) involves inserting microwires, which are sometimes thinner than a human hair, deep into the brain to record voltages from as many as a thousand simultaneous locations. Many epileptic patients spend days in the hospital while ECoG is used to figure out what part of their brain is triggering seizures and should be resected, and kindly agree to let neuroscientists perform consciousness experiments on them in the meantime. Finally, fluorescent voltage sensing involves genetically manipulating neurons to emit flashes of light when firing, enabling their activity to be measured with a microscope. Out of all the techniques, it has the potential to rapidly monitor the largest number of neurons, at least in animals with transparent brains—such as the C. elegans worm with its 302 neurons and the larval zebrafish with its roughly 100,000.

Although Francis Crick warned Christof Koch about studying consciousness, Christof refused to give up and eventually won Francis over. In 1990, they wrote a seminal paper about what they called “neural correlates of consciousness” (NCCs), asking which specific brain processes corresponded to conscious experiences. For thousands of years, thinkers had had access to the information processing in their brains only via their subjective experience and behavior. Crick and Koch pointed out that brain-reading technology was suddenly providing independent access to this information, allowing scientific study of which information processing corresponded to what conscious experience. Sure enough, technology-driven measurements have by now turned the quest for NCCs into quite a mainstream part of neuroscience, one whose thousands of publications extend into even the most prestigious journals.10

What are the conclusions so far? To get a flavor for NCC detective work, let’s first ask whether your retina is conscious, or whether it’s merely a zombie system that records visual information, processes it and sends it on to a system downstream in your brain where your subjective visual experience occurs. In the left panel of figure 8.4, which square is darker: the one labeled A or B? A, right? No, they’re in fact identically colored, which you can verify by looking at them through small holes between your fingers. This proves that your visual experience can’t reside entirely in your retina, since if it did, they’d look the same.

Now look at the right panel of figure 8.4. Do you see two women or a vase? If you look long enough, you’ll subjectively experience both in succession, even though the information reaching your retina remains the same. By measuring what happens in your brain during the two situations, one can tease apart what makes the difference—and it’s not the retina, which behaves identically in both cases.

The death blow to the conscious-retina hypothesis comes from a technique called “continuous flash suppression” pioneered by Christof Koch, Stanislas Dehaene and collaborators: it’s been discovered that if you make one of your eyes watch a complicated sequence of rapidly changing patterns, then this will distract your visual system to such an extent that you’ll be completely unaware of a still image shown to the other eye.11 In summary, you can have a visual image in your retina without experiencing it, and you can (while dreaming) experience an image without it being on your retina. This proves that your two retinas don’t host your visual consciousness any more than a video camera does, even though they perform complicated computations involving over a hundred million neurons.

NCC researchers also use continuous flash suppression, unstable visual/auditory illusions and other tricks to pinpoint which of your brain regions are responsible for each of your conscious experiences. The basic strategy is to compare what your neurons are doing in two situations where essentially everything (including your sensory input) is the same—except your conscious experience. The parts of your brain that are measured to behave differently are then identified as NCCs.
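In cartoon form, this differencing strategy looks something like the sketch below, where the region names and activity numbers are all made up for illustration; real NCC studies rely on careful statistics across many trials and subjects rather than a single threshold:

```python
import numpy as np

# Cartoon version of the NCC strategy: identical sensory input, two
# different conscious experiences (e.g. 'vase' vs. 'two faces'), and
# per-region activity measurements. All numbers are invented.
regions = ["retina", "primary_visual", "hot_zone", "brainstem"]
activity_vase  = np.array([0.80, 0.61, 0.20, 0.50])
activity_faces = np.array([0.80, 0.60, 0.75, 0.50])

# Regions whose activity differs while the input stays fixed are the
# NCC candidates; regions that behave identically are ruled out.
threshold = 0.1
difference = np.abs(activity_vase - activity_faces)
candidates = [r for r, d in zip(regions, difference) if d > threshold]
print(candidates)  # -> ['hot_zone']; the retina behaves identically.
```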

Such NCC research has proven that none of your consciousness resides in your gut, even though that’s the location of your enteric nervous system with its whopping half-billion neurons that compute how to optimally digest your food; feelings such as hunger and nausea are instead produced in your brain. Similarly, none of your consciousness appears to reside in the brainstem, the bottom part of the brain that connects to the spinal cord and controls breathing, heart rate and blood pressure. More shockingly, your consciousness doesn’t appear to extend to your cerebellum (figure 8.3), which contains about two-thirds of all your neurons: patients whose cerebellum is destroyed experience slurred speech and clumsy motion reminiscent of a drunkard, but remain fully conscious.

The question of which parts of your brain are responsible for consciousness remains open and controversial. Some recent NCC research suggests that your consciousness mainly resides in a “hot zone” involving the thalamus (near the middle of your brain) and the rear part of the cortex (the outer brain layer consisting of a crumpled-up six-layer sheet which, if flattened out, would have the area of a large dinner napkin).12 This same research controversially suggests that the primary visual cortex at the very back of the head is an exception to this, being as unconscious as your eyeballs and your retinas.

When Is Consciousness?

So far, we’ve looked at experimental clues regarding what types of information processing are conscious and where consciousness occurs. But when does it occur? When I was a kid, I used to think that we become conscious of events as they happen, with absolutely no time lag or delay. Although that’s still how it subjectively feels to me, it clearly can’t be correct, since it takes time for my brain to process the information that enters via my sensory organs. NCC researchers have carefully measured how long, and Christof Koch’s summary is that it takes about a quarter of a second from when light enters your eye from a complex object until you consciously perceive seeing it as what it is.13 This means that if you’re driving down a highway at fifty-five miles per hour and suddenly see a squirrel a few meters in front of you, it’s too late for you to do anything about it, because you’ve already run over it!
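A quick sanity check of that claim, using the quarter-second figure and a simple unit conversion:

```python
# How far do you travel during the ~0.25 s lag before conscious perception?
speed_mph = 55
speed_ms = speed_mph * 1609.344 / 3600  # ~24.6 m/s
lag_s = 0.25                            # Koch's quarter-second estimate
print(speed_ms * lag_s)                 # ~6.1 meters
# At highway speed you cover about six meters before you consciously
# see the squirrel, so "a few meters ahead" is indeed already too late.
```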

In summary, your consciousness lives in the past, with Christof Koch estimating that it lags behind the outside world by about a quarter second. Intriguingly, you can often react to things faster than you can become conscious of them, which proves that the information processing in charge of your most rapid reactions must be unconscious. For example, if a foreign object approaches your eye, your blink reflex can close your eyelid within a mere tenth of a second. It’s as if one of your brain systems receives ominous information from the visual system, computes that your eye is in danger of getting struck, emails your eye muscles instructions to blink and simultaneously emails the conscious part of your brain saying “Hey, we’re going to blink.” By the time this email has been read and incorporated into your conscious experience, the blink has already happened.

Indeed, the system that reads that email is continually bombarded with messages from all over your body, some more delayed than others. It takes longer for nerve signals to reach your brain from your fingers than from your face because of distance, and it takes longer for you to analyze images than sounds because it’s more complicated—which is why Olympic races are started with a bang rather than with a visual cue. Yet if you touch your nose, you consciously experience the sensation on your nose and fingertip as simultaneous, and if you clap your hands, you see, hear and feel the clap at exactly the same time.14 This means that your full conscious experience of an event isn’t created until the last slowpoke email reports have trickled in and been analyzed.
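To get a feel for the delays your brain is compensating for, here is a rough calculation; the conduction velocity and path lengths are ballpark physiological figures of my own choosing, not numbers from this chapter:

```python
# Ballpark nerve-signal travel times; the velocity and distances below
# are illustrative textbook-style figures, not values from the text.
velocity = 60.0               # m/s, rough conduction speed of touch fibers
d_finger, d_face = 1.0, 0.1   # meters of nerve path to the brain

t_finger = d_finger / velocity  # ~0.017 s
t_face = d_face / velocity      # ~0.002 s
print(round((t_finger - t_face) * 1000), "ms")  # ~15 ms gap
# Your brain holds off assembling the unified "nose touch" experience
# until the slower fingertip report arrives, so both feel simultaneous.
```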

A famous family of NCC experiments pioneered by physiologist Benjamin Libet has shown that the sort of actions you can perform unconsciously aren’t limited to rapid responses such as blinks and ping-pong smashes, but also include certain decisions that you might attribute to free will—brain measurements can sometimes predict your decision before you become conscious of having made it.

Theories of Consciousness

We’ve just seen that, although we still don’t understand consciousness, we have amazing amounts of experimental data about various aspects of it. But all this data comes from brains, so how can it teach us anything about consciousness in machines? This requires a major extrapolation beyond our current experimental domain. In other words, it requires a theory.
