The Puerto Rico Adventure
This marked the beginning of an amazing adventure, which still continues. As I mentioned in chapter 1, we held regular brainstorming meetings at our house with dozens of idealistic students, professors and other local thinkers, where the top-rated ideas transformed into projects—the first being that AI op-ed from chapter 1 with Stephen Hawking, Stuart Russell and Frank Wilczek that helped ignite the public debate. In parallel with the baby steps of setting up a new organization (such as incorporating, recruiting an advisory board and launching a website), we held a fun launch event in front of a packed MIT auditorium, at which Alan Alda explored the future of technology with leading experts.
We focused the rest of the year on pulling together the Puerto Rico conference which, as I mentioned in chapter 1, aimed to engage the world’s leading AI researchers in the discussion of how to keep AI beneficial. Our goal was to shift the AI-safety conversation from worrying to working: from bickering about how worried to be, to agreeing on concrete research projects that could be started right away to maximize the chance of a good outcome. To prepare, we collected promising AI-safety research ideas from around the world and sought community feedback on our growing project list. With the help of Stuart Russell and a group of hardworking young volunteers, especially Daniel Dewey, János Krámar and Richard Mallah, we distilled these research priorities into a document to be discussed at the conference.1 Building consensus that there was lots of valuable AI-safety research to be done would, we hoped, encourage people to start doing such research. The ultimate moonshot triumph would be if it could even persuade someone to fund it since, so far, there had been essentially no support for such work from government funding agencies.
Enter Elon Musk. On August 2, he appeared on our radar by famously tweeting “Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes.” I reached out to him about our efforts, and got to speak with him by phone a few weeks later. Although I felt quite nervous and starstruck, the outcome was outstanding: he agreed to join our FLI scientific advisory board, to attend our conference and potentially to fund a first-ever AI-safety research program to be announced in Puerto Rico. This electrified all of us at FLI, and made us redouble our efforts to create an awesome conference, identify promising research topics and build community support for them.
I finally got to meet Elon in person for further planning when he came to MIT two months later for a space symposium. It felt very strange to be alone with him in a small green room just moments after he’d enraptured over a thousand MIT students like a rock star, but after a few minutes, all I could think of was our joint project. I instantly liked him. He radiated sincerity, and I was inspired by how much he cared about the long-term future of humanity—and how he audaciously turned his aspiration into actions. He wanted humanity to explore and settle our Universe, so he started a space company. He wanted sustainable energy, so he started a solar company and an electric-car company. Tall, handsome, eloquent and incredibly knowledgeable, he made it easy to understand why people listened to him.
Unfortunately, this MIT event also taught me how fear-driven and divisive media can be. Elon’s stage performance consisted of an hour of fascinating discussion about space exploration, which I think would have made great TV. At the very end, a student asked him an off-topic question about AI. His answer included the phrase “with artificial intelligence, we are summoning the demon,” which became the only thing that most media reported—and generally out of context. It struck me that many journalists were inadvertently doing the exact opposite of what we were trying to accomplish in Puerto Rico. Whereas we wanted to build community consensus by highlighting the common ground, the media had an incentive to highlight the divisions. The more controversy they could report, the greater their Nielsen ratings and ad revenue. Moreover, whereas we wanted to help people from across the spectrum of opinions to come together, get along and understand each other better, media coverage inadvertently made people across the opinion spectrum upset at one another, fueling misunderstandings by publishing only their most provocative-sounding quotes without context. For this reason, we decided to ban journalists from the Puerto Rico meeting and impose the “Chatham House Rule,” which prohibits participants from subsequently revealing who said what.*
Although our Puerto Rico conference ended up being a success, it didn’t come easy. The countdown mostly required diligent prep work, for example me phoning or skyping large numbers of AI researchers to assemble a critical mass of participants to attract the other attendees, and there were also dramatic moments—such as when I got up by 7 a.m. on December 27 to reach Elon on a lousy phone connection to Uruguay, and was told “I don’t think this is gonna work.” He was concerned that an AI-safety research program might provide a false sense of security, enabling reckless researchers to forge ahead while paying lip service to safety. But then, despite the sound incessantly cutting out, we extensively talked through the huge benefits of mainstreaming the topic and getting more AI researchers working on AI safety. After the call dropped, he sent me one of my favorite emails ever: “Lost the call at the end there. Anyway, docs look fine. I’m happy to support the research with $5M over three years. Maybe we should make it $10M?”
Four days later, 2015 got off to a good start for Meia and me as we briefly relaxed before the meeting, dancing in the new year on a Puerto Rico beach illuminated by fireworks. The conference got off to a great start too: there was remarkable consensus that more AI-safety research was needed, and based on further input from the conference participants, that research priorities document we’d worked so hard on was improved and finalized. We passed around that safety-research-endorsing open letter from chapter 1, and were delighted that almost everyone signed it.
Meia and I had a magical meeting with Elon in our hotel room where he blessed the detailed plans for our grants program. She was touched by how down-to-earth and candid he was about his personal life, and how much interest he took in us. He asked us how we met, and liked Meia’s elaborate story. The next day, we filmed an interview with him about AI safety and why he wanted to support it, and everything seemed on track.2
The conference climax, Elon’s donation announcement, was scheduled for 7 p.m. on Sunday, January 4, 2015, and I’d been so tense about it that I’d tossed and turned in my sleep the night before. And then, just fifteen minutes before we were supposed to head to the session where it would happen, we hit a snag! Elon’s assistant called and said that it looked like Elon might not be able to go through with the announcement, and Meia said she’d never seen me look more stressed or disappointed. Elon finally came by, and I could hear the seconds counting down to the session start as we sat there and talked. He explained that they were just two days away from a crucial SpaceX rocket launch where they hoped to pull off the first-ever successful landing of the first stage on a drone ship, and that since this was a huge milestone, the SpaceX team didn’t want to distract from it with concurrent media splashes involving him. Anthony Aguirre, cool and levelheaded as always, pointed out that this meant that nobody wanted media attention for this, neither Elon nor the AI community. We arrived a few minutes late to the session I was moderating, but we had a plan: no dollar amount would get mentioned, to ensure that the announcement wasn’t newsworthy, and I’d lord Chatham House over everyone to keep Elon’s announcement secret from the world for nine days if his rocket reached the space station, regardless of whether the landing succeeded; he said he’d need even more time if the rocket exploded on launch.
The countdown to the announcement finally reached zero. The superintelligence panelists that I’d moderated still sat there next to me onstage in their chairs: Eliezer Yudkowsky, Elon Musk, Nick Bostrom, Richard Mallah, Murray Shanahan, Bart Selman, Shane Legg and Vernor Vinge. People gradually stopped applauding, but the panelists remained seated, because I’d told them to stay without explaining why. Meia later told me that her pulse reached the stratosphere around now, and that she clutched Viktoriya Krakovna’s calming hand under the table. I smiled, knowing that this was the moment we’d worked, hoped and waited for.
I was very happy that there was such consensus at the meeting that more research was needed for keeping AI beneficial, I said, and that there were so many concrete research directions we could work on right away. But there had been talk of serious risks in this session, I added, so it would be nice to raise our spirits and get into an upbeat mood before heading out to the bar and the conference banquet that had been set up outside. “And I’m therefore giving the microphone to…Elon Musk!” I felt that history was in the making as Elon took the mic and announced that he would donate a large amount of money to AI-safety research. Unsurprisingly, he brought down the house. As planned, he didn’t mention how much, but I knew that it was a cool $10 million, as we’d agreed.
Meia and I went to visit our parents in Sweden and Romania after the conference, and with bated breath, we watched the live-streamed rocket launch with my dad in Stockholm. The landing attempt unfortunately ended with what Elon euphemistically calls an RUD, “rapid unscheduled disassembly,” and pulling off a successful ocean landing took his team another fifteen months.3 However, all the satellites were successfully launched into orbit, as was our grants program via a tweet by Elon to his millions of followers.4
Mainstreaming AI Safety
A key goal of the Puerto Rico conference had been to mainstream AI-safety research, and it was exhilarating to see this unfold in multiple steps. First there was the meeting itself, where many researchers started feeling comfortable engaging with the topic once they realized that they were part of a growing community of peers. I was deeply touched by encouragement from many participants. For example, Cornell University AI professor Bart Selman emailed me saying, “I’ve honestly never seen a better organized or more exciting and intellectually stimulating scientific meeting.”
The next mainstreaming step began on January 11 when Elon tweeted “World’s top artificial intelligence developers sign open letter calling for AI-safety research,”5 linking to a sign-up page that soon racked up over eight thousand signatures, including many of the world’s most prominent AI builders. It suddenly became harder to claim that people concerned about AI safety didn’t know what they were talking about, because this now implied that a who’s who of leading AI researchers didn’t know what they were talking about.
The open letter was reported by media around the world in a way that made us grateful that we’d barred journalists from our conference. Although the most alarmist word in the letter was “pitfalls,” it nonetheless triggered headlines such as “Elon Musk and Stephen Hawking Sign Open Letter in Hopes of Preventing Robot Uprising,” illustrated by murderous terminators. Of the hundreds of articles we spotted, our favorite was one mocking the others, writing that “a headline that conjures visions of skeletal androids stomping human skulls underfoot turns complex, transformative technology into a carnival sideshow.”6 Fortunately, there were also many sober news articles, and they gave us another challenge: keeping up with the torrent of new signatures, which needed to be manually verified to protect our credibility and weed out pranks such as “HAL 9000,” “Terminator,” “Sarah Jeanette Connor” and “Skynet.” For this and our future open letters, Viktoriya Krakovna and János Krámar helped organize a volunteer brigade of checkers that included Jesse Galef, Eric Gastfriend and Revathi Vinoth Kumar working in shifts, so that when Revathi went to sleep in India, she passed the baton to Eric in Boston, and so on.
The third mainstreaming step began four days later, when Elon tweeted a link to our announcement that he was donating $10 million to AI-safety research.7 A week later, we launched an online portal where researchers from around the world could apply and compete for this funding. We were able to whip the application system together so quickly only because Anthony and I had spent the previous decade running similar competitions for physics grants. The Open Philanthropy Project, a California-based charity focused on high-impact giving, generously agreed to top up Elon’s gift to allow us to give more grants. We weren’t sure how many applicants we’d get, since the topic was novel and the deadline was short. The response blew us away, with about three hundred teams from around the world asking for about $100 million. A panel of AI professors and other researchers carefully reviewed the proposals and selected thirty-seven winning teams, who were funded for up to three years. When we announced the list of winners, it marked the first time that the media response to our activities was fairly nuanced and free of killer-robot pictures. It was finally sinking in that AI safety wasn’t empty talk: there was actual useful work to be done, and lots of great research teams were rolling up their sleeves to join the effort.
The fourth mainstreaming step happened organically over the next two years, with scores of technical publications and dozens of workshops on AI safety around the world, typically as parts of mainstream AI conferences. Persistent people had tried for many years to engage the AI community in safety research, with limited success, but now things really took off. Many of these publications were funded by our grants program and we at FLI did our best to help organize and fund as many of these workshops as we could, but a growing fraction of them were enabled by AI researchers investing their own time and resources. As a result, ever more researchers learned about safety research from their own colleagues, discovering that aside from being useful, it could also be fun, involving interesting mathematical and computational problems to puzzle over.
Complicated equations aren’t everyone’s idea of fun, of course. Two years after our Puerto Rico conference, we preceded our Asilomar conference with a technical workshop where our FLI grant winners could showcase their research, and watched slide after slide with mathematical symbols on the big screen. Moshe Vardi, an AI professor at Rice University, joked that he knew we’d succeeded in establishing an AI-safety research field once the meetings got boring.
This dramatic growth of AI-safety work wasn’t limited to academia. Amazon, DeepMind, Facebook, Google, IBM and Microsoft launched an industry partnership for beneficial AI.8 Major new AI-safety donations enabled expanded research at our largest nonprofit sister organizations: the Machine Intelligence Research Institute in Berkeley, the Future of Humanity Institute in Oxford and the Centre for the Study of Existential Risk in Cambridge (UK). Further donations of $10 million or more kick-started additional beneficial-AI efforts: the Leverhulme Centre for the Future of Intelligence in Cambridge, the K&L Gates Endowment for Ethics and Computational Technologies in Pittsburgh and the Ethics and Governance of Artificial Intelligence Fund in Miami. Last but not least, with a billion-dollar commitment, Elon Musk partnered with other entrepreneurs to launch OpenAI, a nonprofit company in San Francisco pursuing beneficial AI. AI-safety research was here to stay.
In lockstep with this surge of research came a surge of opinions being expressed, both individually and collectively. The industry Partnership on AI published its founding tenets, and long reports with lists of recommendations were published by the U.S. government, Stanford University and the IEEE (the world’s largest organization of technical professionals), together with dozens of other reports and position papers from elsewhere.9
We were eager to facilitate meaningful discussion among the Asilomar attendees and learn what, if anything, this diverse community agreed on. Lucas Perry therefore took on the heroic task of reading all of those documents we’d found and extracting all their opinions. In a marathon effort initiated by Anthony Aguirre and concluded by a series of long telecons, our FLI team then attempted to group similar opinions together and strip away redundant bureaucratic verbiage to end up with a single list of succinct principles, also including unpublished but influential opinions that had been expressed more informally in talks and elsewhere. But this list still included plenty of ambiguity, contradiction and room for interpretation, so the month before the conference, we shared it with the participants and collected their opinions and suggestions for improved or novel principles. This community input produced a significantly revised principle list for use at the conference.
In Asilomar, the list was further improved in two steps. First, small groups discussed the principles they were most interested in (figure 9.4), producing detailed refinements, feedback, new principles and competing versions of old ones. Finally, we surveyed all attendees to determine the level of support for each version of each principle.
This collective process was both exhaustive and exhausting, with Anthony, Meia and me curtailing sleep and lunch time at the conference in our scramble to compile everything needed in time for the next steps. But it was also exciting. After such detailed, thorny and sometimes contentious discussions and such a wide range of feedback, we were astonished by the high level of consensus that emerged around many of the principles during that final survey, with some getting over 97% support. This consensus allowed us to set a high bar for inclusion in the final list: we kept only principles that at least 90% of the attendees agreed on. Although this meant that some popular principles were dropped at the last minute, including some of my personal favorites,10 it enabled most of the participants to feel comfortable endorsing all of them on the sign-up sheet that we passed around the auditorium. Here’s the result.