Autonomous Weapons

Book: Life 3.0 / Chapter 15


AUTONOMOUS WEAPONS:

An Open Letter from AI & Robotics Researchers

Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions. Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is practically if not legally feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.

Many arguments have been made for and against autonomous weapons, for example that replacing human soldiers by machines is good by reducing casualties for the owner but bad by thereby lowering the threshold for going to battle. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they’ll become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.

Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits. Indeed, chemists and biologists have broadly supported international agreements that have successfully prohibited chemical and biological weapons, just as most physicists supported the treaties banning space-based nuclear weapons and blinding laser weapons.

To make it harder to dismiss our concerns as coming only from pacifist tree-huggers, I wanted to get our letter signed by as many hardcore AI researchers and roboticists as possible. The International Committee for Robot Arms Control had previously amassed hundreds of signatories who called for a ban on killer robots, and I suspected that we could do even better. I knew that professional organizations would be reluctant to share their massive member email lists for a purpose that could be construed as political, so I scraped together lists of researchers' names and institutions from online documents and advertised the task of finding their email addresses on MTurk, the Amazon Mechanical Turk crowdsourcing platform. Most researchers have their email addresses listed on their university websites, and twenty-four hours and $54 later, I was the proud owner of a mailing list of hundreds of AI researchers who'd been successful enough to be elected Fellows of the Association for the Advancement of Artificial Intelligence (AAAI). One of them was the British-Australian AI professor Toby Walsh, who kindly agreed to email everyone else on the list and help spearhead our campaign. MTurk workers around the world tirelessly produced additional mailing lists for Toby, and before long, over 3,000 AI and robotics researchers had signed our open letter, including six past AAAI presidents and AI industry leaders from Google, Facebook, Microsoft and Tesla. An army of FLI volunteers validated the signatory lists, removing spoof entries such as Bill Clinton and Sarah Connor. Over 17,000 others signed too, including Stephen Hawking, and after Toby organized a press conference about this at the International Joint Conference on Artificial Intelligence, it became a major news story around the world.
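For readers curious about the mechanics, the cleanup step described above (deduplicating crowdsourced lists and weeding out joke signatures) is a routine data-hygiene task. Here is a minimal Python sketch of what such a pass could look like; the function name, the spoof-name set and all sample data are illustrative assumptions, not FLI's actual tooling.

    # Illustrative sketch only: deduplicate raw (name, affiliation) entries
    # and flag known joke names for human review, as described in the text.

    SPOOF_NAMES = {"bill clinton", "sarah connor"}  # joke entries named above

    def clean_signatories(entries):
        """Split raw (name, affiliation) pairs into accepted and flagged lists."""
        seen, accepted, flagged = set(), [], []
        for name, affiliation in entries:
            key = name.strip().lower()
            if key in seen:                 # skip duplicate submissions
                continue
            seen.add(key)
            if key in SPOOF_NAMES:          # route suspicious names to reviewers
                flagged.append((name, affiliation))
            else:
                accepted.append((name, affiliation))
        return accepted, flagged

    accepted, flagged = clean_signatories([
        ("Toby Walsh", "UNSW Sydney"),
        ("Toby Walsh", "UNSW Sydney"),      # duplicate submission
        ("Sarah Connor", "Cyberdyne"),      # spoof entry
    ])
    print(len(accepted), "accepted;", len(flagged), "flagged for review")

In practice the flagged entries would go to human volunteers rather than being discarded automatically, since a real researcher can share a name with a famous figure.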

Because biologists and chemists once took a stand, their fields are now known mainly for creating beneficial medicines and materials rather than biological and chemical weapons. The AI and robotics communities had now spoken as well: the letter signatories also wanted their fields to be known for creating a better future, not for creating new ways of killing people. But will the main future use of AI be civilian or military? Although we’ve spent more pages in this chapter on the former, we may soon be spending more money on the latter, especially if a military AI arms race takes off. Civilian AI investment commitments exceeded a billion dollars in 2016, but this was dwarfed by the Pentagon’s fiscal 2017 budget request of $12–15 billion for AI-related projects, and China and Russia are likely to take note of what Deputy Defense Secretary Robert Work said when this was announced: “I want our competitors to wonder what’s behind the black curtain.”41

Should There Be an International Treaty?

Although there’s now a major international push toward negotiating some form of killer robot ban, it’s still unclear what will happen, and there’s a vibrant ongoing debate about what, if anything, should happen. While many leading stakeholders agree that world powers should draft some form of international regulations to guide research on and use of autonomous weapons systems (AWS), there’s less agreement about what precisely should be banned and how a ban would be enforced. For example, should only lethal autonomous weapons be banned, or also ones that seriously injure people, say by blinding them? Would we ban development, production or ownership? Should a ban apply to all autonomous weapons systems or, as our letter said, only offensive ones, allowing defensive systems such as autonomous anti-aircraft guns and missile defenses? In the latter case, should AWS count as defensive even if they’re easy to move into enemy territory? And how would you enforce a treaty, given that most components of an autonomous weapon have dual civilian uses as well? For example, there isn’t much difference between a drone that can deliver Amazon packages and one that can deliver bombs.

Some debaters have argued that designing an effective AWS treaty is hopelessly hard and that we therefore shouldn’t even try. On the other hand, John F. Kennedy emphasized when announcing the Moon missions that hard things are worth attempting when success will greatly benefit the future of humanity. Moreover, many experts argue that the bans on biological and chemical weapons were valuable even though enforcement proved hard, with significant cheating, because the bans caused severe stigmatization that limited their use.

I met Henry Kissinger at a dinner event in 2016, and got the opportunity to ask him about his role in the biological weapons ban. He explained how back when he was the U.S. national security adviser, he’d persuaded President Nixon that a ban would be good for U.S. national security. I was impressed by how sharp his mind and memory were for a ninety-two-year-old, and was fascinated to hear his inside perspective. Since the United States already enjoyed superpower status thanks to its conventional and nuclear forces, it had more to lose than to gain from a worldwide bioweapons arms race with uncertain outcome. In other words, if you’re already top dog, then it makes sense to follow the maxim “If it ain’t broke, don’t fix it.” Stuart Russell joined our after-dinner conversation, and we discussed how exactly the same argument can be made about lethal autonomous weapons: those who stand to gain most from an arms race aren’t superpowers but small rogue states and non-state actors such as terrorists, who gain access to the weapons via the black market once they’ve been developed.

Once mass-produced, small AI-powered killer drones are likely to cost little more than a smartphone. Whether it’s a terrorist wanting to assassinate a politician or a jilted lover seeking revenge on his ex-girlfriend, all they need to do is upload their target’s photo and address into the killer drone: it can then fly to the destination, identify and eliminate the person, and self-destruct to ensure that nobody knows who was responsible. Alternatively, for those bent on ethnic cleansing, it can easily be programmed to kill only people with a certain skin color or ethnicity. Stuart envisions that the smarter such weapons get, the less material, firepower and money will be needed per kill. For example, he fears bumblebee-sized drones that kill cheaply using minimal explosive power by shooting people in the eye, which is soft enough to allow even a small projectile to continue into the brain. Or they might latch on to the head with metal claws and then penetrate the skull with a tiny shaped charge. If a million such killer drones can be dispatched from the back of a single truck, then one has a horrifying weapon of mass destruction of a whole new kind: one that can selectively kill only a prescribed category of people, leaving everybody and everything else unscathed.

A common counterargument is that we can eliminate such concerns by making killer robots ethical—for example, so that they’ll only kill enemy soldiers. But if we worry about enforcing a ban, then how would it be easier to enforce a requirement that enemy autonomous weapons be 100% ethical than to enforce that they aren’t produced in the first place? And can one consistently claim that the well-trained soldiers of civilized nations are so bad at following the rules of war that robots can do better, while at the same time claiming that rogue nations, dictators and terrorist groups are so good at following the rules of war that they’ll never choose to deploy robots in ways that violate these rules?

Cyberwar

Another interesting military aspect of AI is that it may let you attack your enemy even without building any weapons of your own, through cyberwarfare. As a small prelude to what the future may bring, the Stuxnet worm, widely attributed to the U.S. and Israeli governments, infected fast-spinning centrifuges in Iran’s nuclear-enrichment program and caused them to tear themselves apart. The more automated society gets and the more powerful the attacking AI becomes, the more devastating cyberwarfare can be. If you can hack and crash your enemy’s self-driving cars, auto-piloted planes, nuclear reactors, industrial robots, communication systems, financial systems and power grids, then you can effectively crash his economy and cripple his defenses. If you can hack some of his weapons systems as well, even better.

We began this chapter by surveying how spectacular the near-term opportunities are for AI to benefit humanity—if we manage to make it robust and unhackable. Although AI itself can be used to make AI systems more robust, thereby aiding the cyberwar defense, AI can clearly aid the offense as well. Ensuring that the defense prevails must be one of the most crucial short-term goals for AI development—otherwise all the awesome technology we build can be turned against us!

Jobs and Wages

So far in this chapter, we’ve mainly focused on how AI will affect us as consumers, by enabling transformative new products and services at affordable prices. But how will it affect us as workers, by transforming the job market? If we can figure out how to grow our prosperity through automation without leaving people lacking income or purpose, then we have the potential to create a fantastic future with leisure and unprecedented opulence for everyone who wants it. Few people have thought longer and harder about this than economist Erik Brynjolfsson, one of my MIT colleagues. Although he’s always well-groomed and impeccably dressed, he has Icelandic heritage, and I sometimes can’t help imagining that he only recently trimmed back a wild red Viking beard and mane to blend in at our business school. He certainly hasn’t trimmed back his wild ideas, and he calls his optimistic job-market vision “Digital Athens.” The reason that the Athenian citizens of antiquity had lives of leisure where they could enjoy democracy, art and games was mainly that they had slaves to do much of the work. But why not replace the slaves with AI-powered robots, creating a digital utopia that everyone can enjoy? Erik’s AI-driven economy would not only eliminate stress and drudgery and produce an abundance of everything we want today, but it would also supply a bounty of wonderful new products and services that today’s consumers haven’t yet realized they want.
