Superintelligence MP3 CD – MP3 Audio, May 5, 2015
Superintelligence asks the questions: what happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us? Nick Bostrom lays the foundation for understanding the future of humanity and intelligent life. The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. If machine brains surpassed human brains in general intelligence, then this new superintelligence could become extremely powerful—possibly beyond our control. Just as the fate of the gorillas now depends more on humans than on the gorillas themselves, so would the fate of humankind depend on the actions of the machine superintelligence.
But we have one advantage: we get to make the first move. Will it be possible to construct a seed Artificial Intelligence, to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?
This profoundly ambitious and original audiobook breaks down a vast tract of difficult intellectual terrain. After an utterly engrossing journey that takes us to the frontiers of thinking about the human condition and the future of intelligent life, we find in Nick Bostrom's work nothing less than a reconceptualization of the essential task of our time.
Similar items that may ship from close to you
Editorial Reviews
Review
"Terribly important... groundbreaking... extraordinary sagacity and clarity, enabling him to combine his wide-ranging knowledge over an impressively broad spectrum of disciplines - engineering, natural sciences, medicine, social sciences and philosophy - into a comprehensible whole... If this book gets the reception that it deserves, it may turn out to be the most important alarm bell since Rachel Carson's Silent Spring from 1962, or ever." --Olle Haggstrom, Professor of Mathematical Statistics
"Nick Bostrom's excellent book 'Superintelligence' is the best thing I've seen on this topic. It is well worth a read." --Sam Altman, President of Y Combinator and Co-Chairman of OpenAI
"Worth reading.... We need to be super careful with AI. Potentially more dangerous than nukes" --Elon Musk, Founder of SpaceX and Tesla
"Nick Bostrom makes a persuasive case that the future impact of AI is perhaps the most important issue the human race has ever faced. Instead of passively drifting, we need to steer a course. Superintelligence charts the submerged rocks of the future with unprecedented detail. It marks the beginning of a new era." --Stuart Russell, Professor of Computer Science, University of California, Berkeley
"This superb analysis by one of the world's clearest thinkers tackles one of humanity's greatest challenges: if future superhuman artificial intelligence becomes the biggest event in human history, then how can we ensure that it doesn't become the last?" --Professor Max Tegmark, MIT
"Valuable. The implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking" --The Economist
"There is no doubting the force of [Bostrom's] arguments...the problem is a research challenge worthy of the next generation's best mathematical talent. Human civilisation is at stake." --Clive Cookson, Financial Times
"Those disposed to dismiss an 'AI takeover' as science fiction may think again after reading this original and well-argued book." --Martin Rees, Past President, Royal Society
"Every intelligent person should read it." --Nils Nilsson, Artificial Intelligence Pioneer, Stanford University
Product details
- Publisher : Audible Studios on Brilliance Audio; Unabridged edition (May 5, 2015)
- Language : English
- ISBN-10 : 1501227742
- ISBN-13 : 978-1501227745
- Item Weight : 2.53 ounces
- Dimensions : 6.75 x 5.5 x 0.5 inches
- Best Sellers Rank: #2,373,388 in Books
- #3,707 in Artificial Intelligence & Semantics
- #12,884 in Books on CD
About the author
NICK BOSTROM is a Professor at Oxford University, where he is the founding director of the Future of Humanity Institute. Bostrom is the world’s most cited philosopher aged 50 or under. He is the author of more than 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller which sparked the global conversation about the future of AI. His work has pioneered many of the ideas that frame current thinking about humanity’s future (such as the concept of an existential risk, the simulation argument, the vulnerable world hypothesis, astronomical waste, the unilateralist’s curse, etc.), while some of his recent work concerns the moral status of digital minds. His writings have been translated into more than 30 languages; he is a repeat main-stage TED speaker; and he has been interviewed more than 1,000 times by media outlets around the world. He has been on Foreign Policy’s Top 100 Global Thinkers list twice and was included in Prospect’s World Thinkers list, the youngest person in the top 15. He has an academic background in theoretical physics, AI, and computational neuroscience as well as philosophy.
Customer reviews
Top reviews from the United States
Exceptionally logical and rational, and comprehensive enough that it remains thoroughly relevant despite having been written before ChatGPT.
The book stands out for its rigorous analysis and balanced perspective. Bostrom carefully navigates the reader through various scenarios where AI surpasses human intelligence, discussing both the transformative benefits and the existential risks. His writing style is scholarly yet accessible, making complex ideas about AI ethics, future forecasting, and strategic planning understandable to a broad audience.
One of the most compelling aspects of the book is its exploration of the 'control problem' - how humans could control entities far smarter than themselves. Bostrom does not shy away from the challenging philosophical and technical issues this problem presents. He also emphasizes the importance of preparatory work in AI safety research, encouraging proactive rather than reactive measures.
However, some readers might find the level of detail and theoretical nature of the discussions somewhat daunting. The book demands attentiveness and a willingness to engage with deeply philosophical and technical content. Additionally, while Bostrom presents a wide array of possibilities, the book sometimes leans more towards speculative thought than practical solutions.
"Superintelligence: Paths, Dangers, Strategies" is a seminal work in the field of AI and an essential read for anyone interested in the future of technology and its implications for humanity. Bostrom's thorough approach offers valuable insights and raises critical questions that will shape the ongoing conversation about AI and our future.
It seems to me that AI experts' estimates of possible dates for AI reaching human intelligence are mostly far too near, and Bostrom, because he has too much personally invested in transhumanist nonsense and this idea of technological transcendence, isn't clear-eyed enough to see this. If this massive advance happens at all, I think the upper bound of AI-expert estimates is the time frame we should expect it in.
He acknowledges his uncertainty constantly (Bostrom aspires to be a robot himself, constantly striving to avoid the bias that comes with overconfidence, which partly explains why the book is so weird), but I almost think that the book was pointless because of how uncertain this future is. The book is a remarkable effort though, which is why I have given it four stars. Also, because it is strangely compelling to feel like you're reading the reasoning of a pale, Swedish, shiny-headed cyborg.
For example, in discussing the various ways in which AI might be implemented, he concludes that AI (and subsequently, super-intelligent AI) via whole brain emulation is essentially guaranteed to happen due to ever-improving scanning techniques such as MRI or electron microscopy, ever-increasing computing power, and the fact that understanding the brain is not necessary to emulate the brain. Rather, once you can scan it in enough detail, and you have enough hardware to simulate it, it can be done even if the overarching design is a black box to you (individual neurons or clusters of neurons can already be simulated, but we lack the computing power to simulate 10 billion neurons, and we lack the knowledge of how they are all connected in a human brain -- something which various scanning projects are already tackling).
However, he also concludes that due to the time it will take to achieve the necessary advances in scanning and hardware, whole brain emulation is unlikely to be how advanced AI is actually, or initially, achieved. Rather, more conventional AI programming techniques, while perhaps posing a greater need for understanding the nature of intelligence, have a much-reduced hardware requirement (and no scanning requirement) and are likely to reach fruition first.
This is just one example. He slices and dices these issues more ways than you can imagine, coming to what is, in the end, a fairly simple conclusion (if I may inelegantly paraphrase): Super-intelligent AI is coming. It might be in 10 years, maybe 20, maybe 50, but it is coming. And, it is potentially quite dangerous because, by definition, it is smarter than you. So, if it wants to do you harm, it will, and there will be very little you can do about it. Therefore, by the time super-intelligent AI is possible, we had better know not just how to make a super-intelligent AI, but a super-intelligent AI which shares human values and morals (or perhaps embodies human values and morals as we wish they were, since as he points out, we certainly would not want to use some people's values and morals as a template for an AI, and it may be hard to even agree on some such philosophical issues across widely divergent cultures and beliefs).
This is a thought-provoking book. It raises issues that I never even would have thought of had the author not pointed them out. For example, "infrastructure proliferation" is a bizarre, yet presumably possible, way in which a super-intelligent (but in some ways, lacking common sense) AI could end life as we know it without even being malicious -- just indifferent to us while pursuing pedestrian goals in what is, to it, a perfectly logical manner.
I share the author's concerns. Human-level (much less super-intelligent) AI seems far away. So, why worry about the consequences right now? There will be plenty of time to deal with such issues as the ability to program strong AI gets closer. Right?
Maybe, maybe not. As the author also describes in detail, there are many scenarios (perhaps the most likely ones) where one day you don't have AI, and the next you do (e.g., only a single algorithm tweak was keeping the system from being intelligent, and with that solved, all of a sudden your program is smarter than you -- and able to recursively improve itself so that days, or maybe hours or minutes later, it is WAY smarter than you). I hope AI researchers take heed of this book. If the ability to program goals, values, morals and common sense into a computer is not developed in parallel with the ability to create programs that dispassionately "think" at a very high level, we could have a very big problem on our hands.