Controversy Myths
Another common misconception is that the only people harboring concerns about AI and advocating AI-safety research are Luddites who don’t know much about AI. When Stuart Russell mentioned this during his Puerto Rico talk, the audience laughed loudly. A related misconception is that supporting AI-safety research is hugely controversial. In fact, to support a modest investment in AI-safety research, people don’t need to be convinced that risks are high, merely non-negligible, just as a modest investment in home insurance is justified by a non-negligible probability of the home burning down.
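The home-insurance analogy is an expected-value argument: a modest hedge is rational whenever the probability of disaster times the loss exceeds the cost of the hedge, even if that probability is small. A minimal sketch of the arithmetic, using hypothetical numbers (the probability, loss, and premium below are illustrative assumptions, not figures from the text):

```python
def hedge_worthwhile(p_disaster: float, loss: float, cost: float) -> bool:
    """Return True if the expected loss exceeds the cost of the hedge.

    The risk need not be likely to justify a modest investment --
    merely non-negligible relative to the stakes.
    """
    return p_disaster * loss > cost

# Hypothetical numbers: a 1-in-3,000 annual chance of a $300,000 home
# burning down gives an expected loss of $100/year, so a $90 premium
# is already justified in expectation.
print(hedge_worthwhile(1 / 3000, 300_000, 90))  # → True
```

The same structure carries over to the AI-safety argument: one need not believe the risk is high, only that it is non-negligible relative to what is at stake.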
My personal analysis is that the media have made the AI-safety debate seem more controversial than it really is. After all, fear sells, and articles using out-of-context quotes to proclaim imminent doom can generate more clicks than nuanced and balanced ones. As a result, two people who only know about each other’s positions from media quotes are likely to think they disagree more than they really do. For example, a techno-skeptic whose only knowledge about Bill Gates’ position comes from a British tabloid may mistakenly think he believes superintelligence to be imminent. Similarly, someone in the beneficial-AI movement who knows nothing about Andrew Ng’s position except his above-mentioned quote about overpopulation on Mars may mistakenly think he doesn’t care about AI safety. In fact, I personally know that he does—the crux is simply that because his timeline estimates are longer, he naturally tends to prioritize short-term AI challenges over long-term ones.
Myths About What the Risks Are
I rolled my eyes when seeing this headline in the Daily Mail: “Stephen Hawking Warns That Rise of Robots May Be Disastrous for Mankind.” I’ve lost count of how many similar articles I’ve seen. Typically, they’re accompanied by an evil-looking robot carrying a weapon, and suggest that we should worry about robots rising up and killing us because they’ve become conscious and/or evil. On a lighter note, such articles are actually rather impressive, because they succinctly summarize the scenario that my AI colleagues don’t worry about. That scenario combines as many as three separate misconceptions: concern about consciousness, evil and robots, respectively.
If you drive down the road, you have a subjective experience of colors, sounds, etc. But does a self-driving car have a subjective experience? Does it feel like anything at all to be a self-driving car, or is it like an unconscious zombie without any subjective experience? Although this mystery of consciousness is interesting in its own right, and we’ll devote chapter 8 to it, it’s irrelevant to AI risk. If you get struck by a driverless car, it makes no difference to you whether it subjectively feels conscious. In the same way, what will affect us humans is what superintelligent AI does, not how it subjectively feels.
The fear of machines turning evil is another red herring. The real worry isn’t malevolence, but competence. A superintelligent AI is by definition very good at attaining its goals, whatever they may be, so we need to ensure that its goals are aligned with ours. You’re probably not an ant hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. The beneficial-AI movement wants to avoid placing humanity in the position of those ants.
The consciousness misconception is related to the myth that machines can’t have goals. Machines can obviously have goals in the narrow sense of exhibiting goal-oriented behavior: the behavior of a heat-seeking missile is most economically explained as a goal to hit a target. If you feel threatened by a machine whose goals are misaligned with yours, then it’s precisely its goals in this narrow sense that trouble you, not whether the machine is conscious and experiences a sense of purpose. If that heat-seeking missile were chasing you, you probably wouldn’t exclaim “I’m not worried, because machines can’t have goals!” I sympathize with Rodney Brooks and other robotics pioneers who feel unfairly demonized by scaremongering tabloids, because some journalists seem obsessively fixated on robots and adorn many of their articles with evil-looking metal monsters with shiny red eyes. In fact, the main concern of the beneficial-AI movement isn’t with robots but with intelligence itself: specifically, intelligence whose goals are misaligned with ours. To cause us trouble, such misaligned intelligence needs no robotic body, merely an internet connection—we’ll explore in chapter 4 how this may enable outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders and developing weapons we cannot even understand. Even if building robots were physically impossible, a super-intelligent and super-wealthy AI could easily pay or manipulate myriad humans to unwittingly do its bidding, as in William Gibson’s science fiction novel Neuromancer.
The robot misconception is related to the myth that machines can’t control humans. Intelligence enables control: humans control tigers not because we’re stronger, but because we’re smarter. This means that if we cede our position as smartest on our planet, it’s possible that we might also cede control.
Let’s summarize all of these common misconceptions, so that we can dispense with them once and for all and focus our discussions with friends and colleagues on the many legitimate controversies—which, as we’ll see, there’s no shortage of!
- Myth: Superintelligence by 2100 is inevitable. Myth: Superintelligence by 2100 is impossible. Fact: It may happen in decades, centuries, or never; AI experts disagree, and we simply don’t know.
- Myth: Only Luddites worry about AI. Fact: Many top AI researchers are concerned.
- Mythical worry: AI turning evil. Mythical worry: AI turning conscious. Actual worry: AI turning competent, with goals misaligned with ours.
- Myth: Robots are the main concern. Fact: Misaligned intelligence is the main concern; it needs no body, only an internet connection.
- Myth: AI can’t control humans. Fact: Intelligence enables control: we control tigers by being smarter.
- Myth: Machines can’t have goals. Fact: A heat-seeking missile has a goal.
- Mythical worry: Superintelligence is just years away, so panic. Actual worry: It’s at least decades away, but it may take that long to make it safe, so plan ahead.