What Do You Want?
You began this chapter pondering where you want the current AGI race to lead. Now that we’ve explored a broad range of scenarios together, which ones appeal to you and which ones do you think we should try hard to avoid? Do you have a clear favorite? Please let me and fellow readers know at http://AgeOfAi.org, and join the discussion!
The scenarios we’ve covered obviously shouldn’t be viewed as a complete list, and many are thin on details, but I’ve tried hard to be inclusive, spanning the full spectrum from high-tech to low-tech to no-tech and describing all the central hopes and fears expressed in the literature.
One of the most fun parts of writing this book has been hearing what my friends and colleagues think of these scenarios, and I’ve been amused to learn that there’s no consensus whatsoever. The one thing everybody agrees on is that the choices are more subtle than they may initially seem. People who like any one scenario tend to simultaneously find some aspect(s) of it bothersome. To me, this means that we humans need to continue and deepen this conversation about our future goals, so that we know in which direction to steer. The future potential for life in our cosmos is awe-inspiringly grand, so let’s not squander it by drifting like a rudderless ship, clueless about where we want to go!
Just how grand is this future potential? No matter how advanced our technology gets, the ability for Life 3.0 to improve and spread through our cosmos will be limited by the laws of physics—what are these ultimate limits, during the billions of years to come? Is our Universe teeming with extraterrestrial life right now, or are we alone? What happens if different expanding cosmic civilizations meet? We’ll tackle these fascinating questions in the next chapter.
THE BOTTOM LINE:
• The current race toward AGI can end in a fascinatingly broad range of aftermath scenarios for upcoming millennia.
• Superintelligence can peacefully coexist with humans either because it's forced to (enslaved-god scenario) or because it's “friendly AI” that wants to (libertarian-utopia, protector-god, benevolent-dictator and zookeeper scenarios).
• Superintelligence can be prevented by an AI (gatekeeper scenario) or by humans (1984 scenario), by deliberately forgetting the technology (reversion scenario) or by lack of incentives to build it (egalitarian-utopia scenario).
• Humanity can go extinct and get replaced by AIs (conqueror and descendant scenarios) or by nothing (self-destruction scenario).
• There’s absolutely no consensus on which, if any, of these scenarios are desirable, and all involve objectionable elements. This makes it all the more important to continue and deepen the conversation around our future goals, so that we don’t inadvertently drift or steer in an unfortunate direction.