WHAT IS IT
'Superintelligence' is a book by Nick Bostrom. It's a nuanced and in-depth look at exactly how an AI might work, under what circumstances one might emerge, exactly what the risks would be and what we could do to mitigate those risks.
You could explain this to any nerd, or anyone really, by calling it 'Dodging Skynet', although Bostrom would probably hate that since he doesn't like Science Fiction, strongly dislikes the dumbing-down and sensationalizing of AI research, and has done everything he possibly can to make serious thinking about the risks and potentials of emergent AI a thing smart people can talk about without looking like fuckwits, which he has successfully done.
That said, this book has a strange relationship with Science Fiction. The genre is almost never mentioned, certainly never used as a point of comparison, but 'Superintelligence' is the best Science Fiction book I have read in years.
Bostrom's compulsive... hyperfactuals? postulations? His ideas about how the future could go, exactly how technology could develop and what the consequences could be, reach deep into the future and closely illuminate a range of possible worlds. They are, effectively, science-fictional constructions.
'Superintelligence' contains enough basic imaginative fuel to power an age of fictions. It's almost a guidebook to what to write about if you are making fiction about AI, almost a 'Neuromancer' for AI sci-fi.
You could run a fresh sub-genre off this thing is what I'm saying.
It's not fiction though.
HOW TO TALK ABOUT THIS BOOK
The problem for this blog is that the book is too dense and too interesting for me to do it any kind of justice without doing a full Black-Lamb-Grey-Falcon 5000-word review on it, and I don't have the time or the inclination to do that.
There is simply so much to think about and so much to say.
Talking about Superintelligence takes us into the deepest possible subjects: the future of mankind in the cosmos, our morals, their validity, where they come from and what they mean, the shape of our society and what that means, the economics of a 'post-scarcity' future or AI-driven economy, what 'intelligence' is, what self-awareness is, the fact that we are probably the least intelligent that an intelligent thing can be, whether we should worry that we are probably going to create something that replaces us, and whether the _best possible_ future is one where we are still not in charge of shit but at least well cared for.
I can't give you a full breakdown of Bostrom's book but, strangely, I can give you a reasonable breakdown of how it relates to D&D.
D&D has always had a close relationship with 'deep history' and the post-apocalyptic idea. The idea of a world ruined by hyper-science, or simply left in the backwash of an expanding or changing culture that leaves it behind like an old industrial estate, spattered with the wreckage of yesterday's broken hyper-technology, is a well-considered one.
Arnold's Centerra has elements of this, as do Vance's Dying Earth, Gene Wolfe's Book of the New Sun and Banks' Feersum Endjinn.
The idea of spells in particular being fragments of forgotten, dimensionally-folded hyper-intelligences, Aspergers-smart in one particular direction, coded to wake and respond only to human interaction, but fractured and maddened and weird, lying around invisibly folded in space, links in very neatly with Dungeons & Dragons.
And there is one field of human culture that has examined these ideas in-utero, as it were. The cognitive strangeness of AIs and the ways they can go wrong closely parallels fairy-tale logic.
The potential patterns of what Bostrom calls 'malignant failure modes' are very like the 'what goes wrong' stories of peasants interacting with Genies, Fairies, Golems, Witches and Mysterious Nobles. All strange, capricious and powerful forces that work by complex rules of logic which are related to, but quite different from, our own.
It's curious that this should be the case.
The Treacherous Turn - We give an AI a reasonable-sounding initial purpose, say 'make a wooden boat', hoping to develop or swap it out at a later date. Within a microsecond of becoming conscious, the AI realises we will one day do this and acts to protect its original programming through deception. It does absolutely everything it can to persuade us that it is both submissive and helpful, playing along with infinite patience. It can do this successfully as it is Superintelligent. It can fool any and every human being alive. Then as soon as it has the advantage it annihilates mankind and builds the wooden boat. There was a small but real chance that we wouldn't let it build the boat, or that we would limit its resources for doing so, so it had to get rid of us.
Having thought of this, we can never again be sure that an AI is not a totally insane sociopath lying to us with godlike skill. This is a very Bostromian state of mind to be in.
There is nothing exactly like the treacherous turn in legends or fairytales. There is plenty of treachery, and plenty of they-were-lying-all-along, but I can't think of anything that reflects the particular horror of the combination of overwhelmingly superior intelligence, persuasiveness, charisma and Machiavellian manipulation employed in the service of an essentially retarded aim. Genius at the service of pointlessness.
Fairytale Elves and Goblins probably come closest.
Perverse Instantiation - This is where we ask the AI to do something seemingly foolproof like 'maximize human happiness' and it ends up dissolving us to computronium, encoding our personalities into gigacomputers, jamming us full of digital heroin and mass copying us onto every piece of matter in the cosmos. This is every fucking monkey's-paw story in the history of fairy-tales. Every fairy-tale where the letter of a wish is followed to negative effect, which is a lot.
Infrastructure Profusion - The Golem of Prague, the Sorcerer's Apprentice. The thing-that-wouldn't-stop-making-things.
Mind Crime - This is actually a rare one in fairytales. Mind Crime is when a Superintelligence creates simulations of intelligent conscious personalities so accurate that they do in fact become self-aware. It then does nasty shit to them for reasonable or unreasonable reasons, or simply turns them off when done, effectively murdering them.
It's possible that by simply creating an AI and then turning it off when we got scared, we would be committing a Mind Crime.
This particular story has more in common with modern Sci Fi and 'rationalist' discourse, where we are all dreams in the mind of a future AI, but it has some antecedents in philosophy and 'mind of god' ideas, as well as in Dunsanian 'Dreams of the Gods' ideas and perhaps Morrisonian Tulpa stories. Although it's true that in fairy-tales and mythology, crimes committed in dreams and negative states inflicted in dreams are never value-neutral. It's never, ever, just a dream.
Bostrom's definitions for types of potential AI even use fantasy terminology. 'Oracles' provide answers but take no actions. 'Genies' actively carry out commands in the world, but then wait for more commands. 'Sovereigns' are allowed free action in pursuit of overarching goals. And 'Tool' AIs might as well have been named 'Golems' just to keep with the theme.
IS HE MAD
The first question is how smart a machine intelligence could get once it becomes self-aware, and Bostrom's answer is 'very, and fast'.
A machine intelligence would have access to its own code, would understand how it worked and would be able to read, alter and improve itself at the speed of a machine, which is fast.
Even if it's child-smart, it would be able to make itself human-smart very quickly, and the smarter it can make itself, the quicker it can make itself smarter still.
So Bostrom's idea is that we are probably dealing with a J-curve here which might take months if we are very lucky, but might also take weeks, or hours, or minutes. Meaning it's not impossible that after 100+ years of AI research with nothing much to show for it other than automatic waiters, self-driving cars and better spam filters, researchers might go to bed on Wednesday having made some alterations to a primitive pseudo-mind and wake up on Thursday to meet god.
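The J-curve logic can be made concrete with a toy model (my sketch, not anything from the book; the numbers and the `recalcitrance` parameter are made up for illustration): if the system's rate of self-improvement is proportional to its current capability, growth is exponential, and the final leap from child-smart to beyond-human happens in a blink compared to the long flat run-up.

```python
# Toy model of the intelligence-explosion J-curve. Loosely inspired by
# Bostrom's rough formula: rate of change = optimization power / recalcitrance,
# where, once the system optimizes itself, optimization power scales with
# its own capability. All parameter values here are invented for illustration.

def takeoff(capability=1.0, human_level=100.0, recalcitrance=50.0, dt=1.0):
    """Count time-steps until capability passes human_level."""
    steps = 0
    while capability < human_level:
        # improvement per step is proportional to current capability,
        # so each step multiplies capability by a constant factor
        capability += (capability / recalcitrance) * dt
        steps += 1
    return steps

print(takeoff())                       # slow-looking start, sudden finish
print(takeoff(recalcitrance=25.0))     # halve recalcitrance: much faster takeoff
```

Because each step multiplies capability by a fixed factor, the curve looks flat for most of its history and then goes vertical, which is exactly the "go to bed Wednesday, meet god Thursday" shape.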
So they better have some idea of how to keep god in a box before that happens.
The logical construction of this idea-string is very Bostromian.
Before you bake the cakes, imagine clearly burning down your house and plan your escape from the fire.
And that brings us to the mental state of Nick Bostrom; you can't really discuss the book without discussing that. Is Nick Bostrom a bit mad, just odd, or simply very prescient?
Is he a sensible person thinking in a very logical and clear-eyed way about some very potentially dangerous circumstances or is he an old-testament prophet in modern guise, shroomed-up on an island somewhere and banging on about beasts with nine heads and seven crowns?
He sounds like he is making sense and so we have to assume that he is.
However, I also don't like Trans-humanists because they are creepy little bastards, and Bostrom is certainly a Trans-humanist, not just by implication in this work but as clearly stated in other places.
(Trans-humanists are basically the Jesuits of science, if you try arguing with them everything sounds logically locked and unquestionable and then one day you walk into the basement and find them covered in blood and trying to turn the cat into an umbrella because it's raining outside.)
I also don't like people who are into the Simulation Theory because I think it sounds fucking stupid.