Version 1
: Received: 30 September 2018 / Approved: 2 October 2018 / Online: 2 October 2018 (13:50:53 CEST)
How to cite:
Schmidt, T. Solving the AI Race: Addressing the potential pitfalls of competition towards Artificial General Intelligence. Preprints 2018, 2018100024. https://doi.org/10.20944/preprints201810.0024.v1
APA Style
Schmidt, T. (2018). Solving the AI Race: Addressing the potential pitfalls of competition towards Artificial General Intelligence. Preprints. https://doi.org/10.20944/preprints201810.0024.v1
Chicago/Turabian Style
Schmidt, T. 2018. "Solving the AI Race: Addressing the potential pitfalls of competition towards Artificial General Intelligence." Preprints. https://doi.org/10.20944/preprints201810.0024.v1
Abstract
AGI could arise within the next decades, promising a decisive strategic advantage to whoever develops it first. This paper discusses the risks associated with the development of AGI: destabilizing effects on the strategic balance, the underestimation of risks in the interest of winning the race, and the selfish exploitation of its enormous benefits by a tiny minority. Furthermore, a developed AGI could be beyond human control: human goals might not be implemented, and an intelligence explosion to superintelligence could take place, leading to a total loss of control and power. If the competition for AGI is non-transparent, secret, uncontrolled, and unregulated, its risks might not be manageable and could lead to catastrophic consequences. The danger corresponds to that of nuclear weapons. It is crucial that the key actors in a possible AI race agree at an early stage on its prevention and transparent regulation, similar to the measures taken to secure strategic stability: arms control, disarmament, and the prevention of nuclear proliferation. The realization that an uncontrolled AI race could lead to the extinction of humanity, this time even independently of human will, requires analogous measures to contain, prevent, regulate, and secure an AI race within the framework of AGI development.
Keywords
Artificial General Intelligence; superintelligence; decisive strategic advantage; human goals; AI race; strategic stability; nuclear weapons; regulation
Subject
Social Sciences, Political Science
Copyright:
This is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.